The metropolitan and the Theban silk industry: a hypothetical reconstruction*
Many theories have been proposed to explain the success of the Theban silk industry from the twelfth century onward. To contribute to this discussion in the context of recent research developments, this article explores the Theban metropolitan's hypothetical contribution to the industry through the case study of John Kaloktenes, who initiated a series of projects during his tenure (before 1166 – c.1190). The analysis of three of these projects suggests that they might have been designed to support the industry. Thus, this article proposes the working hypothesis that Thebes's industrial success might have benefited substantially from the local metropolitan's active promotion.
likely taking Thebes as a reference, as he did on one other occasion. 11 Thus we can conclude that the Theban silk industry's success relied primarily on its outstanding execution of weaving; there is no clear evidence that Thebes maintained an edge over other silk industrial centres in dyeing. 12 Apart from natural conditions, Thebes's proximity to potential investors and thriving markets has also been brought up: the city not only was the seat of the theme of Hellas and the Peloponnese and a metropolis but maintained convenient overland transportation links and a growing population. 13 The viewpoint has its merits when formulated more specifically. First, Thebes's advantageous administrative status did generate privileges: it enjoyed exemption at least from the fleet tax (πλωίμους), which was imposed upon cities like Athens. 14 We may assume that with a lighter tax burden, Thebans might have had better means to purchase silk products and invest in the silk industry. Secondly, Thebes's inland location would have made it an inferior transportation hub compared to nearby littoral settlements with strong commercial engagement, such as Corinth, Chalkis, Vonitza, and Athens. Nevertheless, as piracy increasingly plagued the region's littoral areas from the late eleventh century, 15 the inland location would have ensured relative security and facilitated the industry's growth. 16 A testimony to this geographical leverage is the city's neglected defensive infrastructure, which stood at odds with its administrative importance. 17 Thirdly, regarding population, we should keep in mind that in the twelfth century, Thebes ranked lowest among the regional metropolitan seats, after Corinth, Athens, and Naupaktos. 18 Its population was estimated at only between 4000 and 5000, 19 while contemporary Thessalonike may have reached 150,000 and Constantinople between 300,000 and 400,000. 20 It seems that Thebes was a relatively small settlement with limited ecclesiastical prestige, restricted in local human resources, market access, and investments. A more compelling impetus in terms of the population seems to have been the city's immigrants. Those from southern Italy, where sizable moriculture and sericulture industries existed, may have brought their industrial resources and experience. 21 Women like the Naupaktians who came to Thebes during the first half of the eleventh century may have contributed to the industry's female labour force. 22 The sizeable Jewish community in Thebes, which had participated in silk manufacture according to Benjamin of Tudela's testimony, was probably also the result of recent immigration. 23 The Jewish population reached two thousand in the 1160s, making it the empire's second-largest Jewish community (after Constantinople, c.2500) and undoubtedly making up a significant proportion of the local inhabitants. 24 Recently, Theresa Shawcross, in a study focusing on the metropolis of Athens, has convincingly shown that metropolitans in Greece during the decades before the Latin conquest were committed to improving their dioceses' economic status.
25 Meanwhile, underexplored evidence concerning early thirteenth-century Naupaktos indicates that the episcopal administration, led by its metropolitan, may have played a prominent role in fostering the local silk industry: it not only had strong demand for silk products but owned many of the local mulberry plantations and maintained directly affiliated groups of textile artisans, some of whom possibly oversaw textile production for the metropolitan episcopate. 26 Regarding the Theban silk industry, a major economic sector for Thebes, 27 the above research developments imply that the metropolitan may have played a role in its success. Existing scholarship often speculates that the profit-driven local aristocrats were the primary promoters of the Theban silk industry: they were not only raw material suppliers but entrepreneurs who provided the workshops, housing, industrial implements, and salaries for the silk artisans and marketed the end products. 28 In this context, the above metropolitan hypothesis brings a new and promising perspective in interpreting the Theban silk industry, from which we may also draw inferences to further our understanding of the Byzantine silk industry as a whole. In what follows, we will take this hypothesis into serious consideration, exploring the hypothetical contribution of the metropolitan of Thebes to the silk industry by seeking possible corroborations from surviving historical sources.
John Kaloktenes and Thebes
Thebes rose to the status of metropolis in the tenth century. 29 Several metropolitans of Thebes are attested in the sources, but in most cases we only know their names at most. 30 Fortunately, one of them is relatively well attested: John Kaloktenes, who is first mentioned as the metropolitan of Thebes in 1166 and probably died between 1186 and 1193. 31 Our knowledge of Kaloktenes derives from scattered sources. The manuscripts containing the earliest known vita and akolouthia of Kaloktenes were lost during the Greek Revolution. The two existing versions of the vita and akolouthia, one included in a manuscript and the other in a published brochure, were nineteenth-century reproductions based on the lost manuscripts. 32 As essentially modern works, their credibility should be treated with caution; where their accounts are supported by quotations from the lost manuscripts or corroborated by archaeological findings, the information they record can be considered reliable. Through his name and the metropolitan title, Kaloktenes has also been identified in various records from the second half of the twelfth century: a participant in councils convened in Constantinople, 33 a clerical personality remembered in the synodikon of the local church, 34 the addressee of a letter from Michael Choniates, 35 and the owner of three surviving seals. 36 Regarding his projects during his metropolitan tenure, some are implied in the extant versions of his vita and akolouthia. Others were recorded by his contemporaries, including John Apokaukos (c.1155-1233) and Theodore Balsamon (1130s-1195). Beyond the availability of sources, John Kaloktenes also happens to be an ideal subject for our investigation. His tenure largely coincided with the historical stage Shawcross has discussed, and it predated the documented Naupaktian silk industry by just a few years, thus constituting a compelling case for comparative examination. Kaloktenes' time as metropolitan witnessed the apogee of the Theban silk industry. By the time he assumed the metropolitan see, sometime before 1166, Thebes must have recovered from the disastrous Norman raid of 1147, and in the 1160s, Benjamin of Tudela could attest to a large Jewish community of silk and purple textile artisans in the city. 37 In the instructions the Genoese authorities gave their envoy in the early 1170s, the right to trade silk in Thebes, previously enjoyed by the Venetians, was among the privileges demanded of the Byzantine emperor. 38 In the mid-1180s, Thebes was described as a primary place of origin for textiles used in Constantinople. 39 In 1195, shortly after the death of Kaloktenes, Theban silk textiles were considered by the Turks the best in Byzantium. Simultaneously, the Byzantine emperor had been receiving a sizable yearly contribution of silk textiles from Thebes. 40 These indicate that Thebes had now surpassed Constantinople as the empire's primary centre of the silk industry. Apart from textual sources, coins recovered from the archaeological site of a silk workshop in Thebes provide additional evidence for the industry's developmental trajectory. 41 While they span from the ninth/tenth to the fourteenth century, roughly half of those dated before 1204 come from the reign of Manuel I Komnenos (1143-85). 42 Thus, the workshop's Byzantine history must have culminated around this period, which coincides with Kaloktenes' tenure.
In this case, if supporting the silk industry was metropolitan policy, we are very likely to find its traces during Kaloktenes' term.
Kaloktenes' initiatives
John Kaloktenes' major initiatives can be summarised as follows. From the testimony of his vita and akolouthia, he constructed an aqueduct in Thebes as well as a church for the Mother of God, which may have functioned as the metropolitan church. 43 He also converted a part of the Theban Jewish population to Christianity and contributed to the city's philanthropic infrastructure, including the foundation of a public retirement home, poor-houses, and hospitals. 44 Only some of these undertakings can be corroborated by additional evidence, 45 for which reason we will include only them in our investigation. From John Apokaukos, we learn that Kaloktenes converted a male monastery in Thebes into a convent and named it the convent of Dekane after the abbess he appointed. 46 Theodore Balsamon attested that Kaloktenes appointed new bishops and established in Thebes a parthenon (παρθενών), a female religious foundation accommodating virgins. 47 Apart from sponsoring the church foundation and appointing bishops, none of these projects was a common endeavour for Byzantine metropolitans. The aqueduct in Thebes is one of very few attested aqueducts from the middle Byzantine period, if not the only one. 48 Kaloktenes' conversion of a male monastery into a convent is the only concrete case of a metropolitan undertaking such a conversion. 49 According to Balsamon, parthenones must have been almost extinct by the time of Kaloktenes, since they no longer existed even in Constantinople, the empire's paramount centre for female religious foundations. 50 Kaloktenes was probably the first founder of a parthenon in a long time. These rather unusual projects were undertaken in an era when nonconformity could ruin a metropolitan's career, an atmosphere Kaloktenes must have sensed. The two councils he attended in Constantinople in 1166 and 1170 were aimed at suppressing deviations from beliefs that the emperor considered orthodox. Kaloktenes himself certainly witnessed the condemnation of George, metropolitan of Nicaea, and Constantine, metropolitan of Corfu, who expressed opposition to established dogma. 51 In addition, Kaloktenes may himself have been condemned for nonconformity: his above-mentioned appointment of bishops was effected without the permission of the Great Synod of Constantinople. For this reason, a synod in the 1170s concluded that the appointment went against canonical provisions, although the penalty Kaloktenes received remains unclear. 52 Under these circumstances, Kaloktenes' rather exceptional projects in Thebes would have made him a bold reformer.
Interpreting the initiatives
To summarise what we have observed so far: by the time Kaloktenes became the metropolitan of Thebes, the Theban silk industry, the city's economic engine, had recovered from the destruction inflicted by the Normans and was on track to reach its acme by the end of his tenure. At about the same time that his counterpart in Athens was committed to building up the diocesan economy, and the metropolitan of Naupaktos was a leading promoter of the local silk industry, Kaloktenes, an innovative and daring figure, introduced a series of unconventional initiatives in Thebes. Although the motivations behind these initiatives are not specified in the records, context leads us to the hypothesis that these initiatives may have stemmed from Kaloktenes' concern for the economy of his diocese, especially its silk industry. The hypothesis appears tenable if we delve into the possible connections between these initiatives and the silk industry.
The aqueduct
The aqueduct is unlikely to have been intended to serve agricultural needs, which undoubtedly would have called for substantial water investment. Its remains, consisting of twenty arches, were identified outside the southern end of the Kadmeia, though they had been demolished by the early twentieth century. 53 The location indicates that the aqueduct was designed to introduce water into the Kadmeia, which was certainly not an intensively cultivated area in Kaloktenes' diocese. 54 The increasing population of Thebes, as implied by various evidence, 55 may have boosted the city's overall water consumption. However, the relatively small size of its population makes it unlikely that the population was the project's primary stimulus. Instead, the aqueduct might well have been introduced to meet the needs of the silk industry. The Kadmeia must have contained a significant part of the silk-processing workshops in Thebes, some of which have been identified through their archaeological remains. 56 The operation of these workshops would have required large quantities of water. Records describing the region's contemporary silk industrial practice suggest that the cocoons were boiled in hot water before being processed into threads. 57 Comparable details of the dyeing process have not yet been found in Byzantine sources, but we can reasonably assume that it must also have involved water use in various stages, given what we know about the better-attested practice in antiquity. 58 The local silk industry's expansion would have resulted in a dramatic surge in demand for water in the Kadmeia, which seems to be the most compelling reason for Kaloktenes' introduction of an aqueduct.
The conversion
Kaloktenes' conversion of a male monastery into the convent of Dekane needs to be examined in context. On average, a major Byzantine provincial centre (and this includes Athens) maintained two convents at most in any given period. 59 In Epiros in 1224/5, the lack of convents forced nuns to stay 'in the forecourts of churches in dilapidated shacks which had space only for broken-down beds.' 60 In contrast, during the tenure of Kaloktenes, Thebes, despite its much smaller population compared to Athens and Naupaktos, 61 seems to have maintained a disproportionately large number of convents. We are informed by Apokaukos that when Kaloktenes inaugurated the convent of Dekane, there were already 'many (πολλῶν) other female convents in Thebes.' 62 If we take this 'many' as at least three and add the convent of Dekane, Thebes then hosted at least four convents. Archaeological evidence implies that there might have been a surge in the number of monasteries in Thebes around the same period: among the six Theban monasteries identified through their remains, five have been dated to the twelfth and early thirteenth centuries. 63 The substantial number of convents Apokaukos implied must have constituted a part of the picture. We do not know if there were other convents founded under Kaloktenes' direct patronage, but by converting a male monastery into a convent, he had contributed to an apparent expansion of convents in Thebes.
Could the coincidence of such an expansion with a thriving silk industry point to a correlation? The hypothesis may seem plausible if we clarify the possible connections between the two developments. The silk industry's success, as we have mentioned, relied upon its superior execution of weaving, which seems to have been conducted primarily by women. John Tzetzes attributed the delicacy of his Theban silk textile to the Theban women's incomparable weaving skills. 64 According to Niketas Choniates, the people taken away by Norman raiders in 1147 were Theban female weavers. 65 Three decades later, when the French poet Chrétien de Troyes described a working scene of three hundred captive silk weavers, which has been convincingly demonstrated as depicting the abducted Theban artisans, 66 he also specified that they were all women. 67 That the Theban weavers' gender was highlighted is not surprising: Byzantine sources tend to present weaving as women's domain. 68 The suggestion is unlikely to be an unrealistic literary topos; 69 ethnographic studies have suggested that women dominating textile weaving is common worldwide. 70 In this vein, we may conclude that supporting women in weaving work must have been the key to promoting the Theban silk industry.
Here we need to bring forward how the female artisans of silk textiles were organised in Thebes. The sources are not informative in this regard, and we have to resort to educated guesswork. We have mentioned historians' speculation favouring aristocrat-sponsored workshops. What has been neglected in the current scholarship is the possible involvement of female religious foundations. By late antiquity, female ascetics in religious foundations were frequently attested as working in textile production and generating income by selling the surplus. 71 From the early twelfth century on, we are informed by the extant typika of convents that the handiwork of nuns, an essential part of their daily lives, was predominantly related to textile manufacturing. 72 For those 'labouring' nuns, such work was intensive and conducted under close supervision. The monastic institution controlled the means of production and absorbed all the products, forming a system of labour exploitation. 73 The nuns' products were not limited to cheap textiles for everyday use but must also have included luxury items. Theodore Balsamon implied that in his contemporary kelliotic convents, the predominant type of convents at that time, 74 garments made of silks and adorned with gold and stones were used in nuns' induction rites. 75 His record suggests that in the second half of the twelfth century, the skill of producing luxurious silk textiles must have been highly valued in most of the convents within the empire. Given that the induction was only one of the many rites performed in convents that might have included the use of high-end textiles, nuns must have extensively engaged in producing such textiles to meet demand.
More importantly, the nuns' handiwork could be market-oriented. The typikon of the convent of Christ Philanthropos in Constantinople (dated c.1307) forbade nuns from doing their own private handiwork and acting as businesswomen, 76 indicating the prevalence of nuns selling their products themselves. The typikon of the convent of the Pantanassa at Baionaia (dated c.1400) also implied that the works of nuns could bring them income, presumably through the sale of their products. 77 In our research period, the commercialisation of product surplus in major monastic centres like Patmos and Athos is attested as prevalent. 78 Around Thebes, it was certainly not rare for convents to explore ways of gaining additional revenue. 79 The reform movement, led by the prominent monastic founder Meletios the Younger (fl. c.1050-c.1105), advocating the rejection of both private and communal possessions, was probably a backlash to monastic communities' pursuit of secular profits. 80 In cities with a surging silk industry like Thebes, convents embraced the opportunity, and their engagement in market-oriented work is attested. 81 In this context, we may suppose that for Theban convents without significant means, producing and selling the textile surplus from nuns' everyday work could have emerged as a common way of sustaining themselves. The convent Kaloktenes converted, like many of its counterparts likely to have been founded in Thebes around the same period, may have helped accommodate more women active in silk production. If aristocrat-sponsored workshops were indeed an organisational form of the Theban silk industry, founding convents was certainly a much more logical and feasible alternative, given Kaloktenes' capacity as a metropolitan, for him to meet his diocese's industrial demand.
The parthenon
As we have already noted, the parthenon, a type of foundation almost abandoned in the empire by Kaloktenes' time, accommodated women consecrated as virgins. According to Balsamon, the virgins dedicated themselves to God and disavowed marriage, following the practice of the consecration of virgins. 82 They resembled ascetics but were considered laywomen, retaining features distinguishing them from nuns: they neither bore the monastic habit and tonsure nor took monastic vows. 83 Thus, the foundation of the Theban parthenon can be seen as a revival of a Christian tradition that had fallen into desuetude.
The hypothetical relevance of the parthenon to the Theban silk industry can be understood, first of all, through its similarity to the convent. Both the parthenon and the convent derived from the same widely documented model of ascetic community, in which residents engaged in textile production and profited from selling the surplus. 84 Although later developments separated the two forms of community, 85 their members must have retained similar day-to-day practices: Balsamon found it necessary to reiterate in his scholia the virgins' identity and their differences from nuns. 86 The virgins must have engaged in textile-related manual labour in the parthenon as nuns did in the convent. Textile products from the Theban parthenon could also have been easily adapted to market needs, as those from the Theban convents possibly were.
More importantly, the parthenon is more likely to have been active in the silk industry than a convent. From a comparative perspective, this point can be illustrated through the parthenon's resemblance to the beguinage in the contemporary Latin West. A beguinage, first recorded in 1230 in Aachen, is a foundation accommodating beguines, women who led a life of devotion but maintained a lay identity without taking solemn vows as nuns did. 87 Originating in the southern Low Countries in the late twelfth century, the beguine movement spread across Europe and reached its peak in the late thirteenth century. Most beguines worked in the textile industry, including silk production, 88 much more extensively than traditional nuns, whose work was mainly confined to embroidering vestments or manufacturing tapestries for religious use. 89 Beguinages also provided the expanding textile industry with a cheap and flexible labour force by attracting rural labour to cities. 90 The revival of the parthenon in Thebes and the beguine movement are comparable in the sense that they were contemporary socio-religious developments in growing urban centres with thriving textile industries. Furthermore, the virgins in the parthenon shared striking similarities with the beguines as a distinct religious group from nuns: both were laywomen who did not take monastic vows like nuns and were bound only by a vow of chastity. Thus, theoretically, the Theban parthenon may have supported the local textile industry as beguinages did. 91 In addition, compared with the convent, the parthenon maintained distinctive features that could have accommodated industrial needs much better. We have mentioned that the virgins differed from nuns in monastic habit, tonsure, and vows. The habit and tonsure made a woman visibly a nun and allowed others to supervise and bear witness for her. 92 Therefore, she had to be determined enough to bear the discipline her actions might incur. In contemporary ecclesiastical writings, assuming the habit signified abandonment of the worldly life, 93 and the tonsure the renunciation of her previous possessions. 94 The monastic vows of a nun essentially included those of poverty (to renounce the world and what was in it), obedience (to endure all the difficulties and tribulations of monastic life until death), and chastity (to retain virginity). 95 In the context of the twelfth century, tonsure and the vow of poverty together would have compelled her to forswear the possessions she acquired both before and after embracing the monastic life. 96 By contrast, none of the above characterised a virgin in the parthenon. She was only obliged to keep a vow of chastity which was certainly less binding: the vow was not the public (ἐναργής) vow nuns took, but more likely an informal or tacit (σιωπώμενον) vow. 97 Such differences would have rendered the parthenon much more attractive than the convent to textile artisans. For ordinary women around Thebes, abandoning worldly life like a nun must have called for some determination. In late eleventh-century Phokis, the brother of Nicholas the pilgrim would respond to Nicholas' repeated invitations to renounce the worldly life with contempt and stern rejections. 98
As a less rigid and disruptive alternative which still offered the possibility of becoming a nun and guaranteed a life not inferior to that of a nun, 99 life as a consecrated virgin may well have been more suitable for those seeking a supportive community to settle in but feeling unprepared to embrace the challenges of monastic life. Women who had trained as textile artisans out of worldly considerations and who did not opt for monastic life from the outset were likely to have belonged to this category. Furthermore, virgins may have been more motivated to improve their productivity and skills. Although nuns' selling their products would have generated income, as we have noted, they were bound by the vow of poverty and not allowed to own possessions, at least beyond everyday necessity. Their avowed commitment to renouncing the world also severely restricted their ability to spend their earnings on their relatives. 100 Since the virgins operated without such restrictions, their output could have benefited themselves or their relatives directly, leading to stronger motivation. This may have been essential, considering that textile crafts like weaving took years to master; 101 for making luxurious silk textiles, the training process could only have been more rigorous. Finally, products from the parthenon could be better commercialised than those from the convent. Trading by monastics was considered a sin of acquiring possessions and an engagement with the secular world they had renounced. 102 Thus, the Pantanassa typikon, the only extant typikon of a convent that provides rules on nuns' commercial behaviour, enforced close surveillance when trading took place. 103 Moral and practical constraints like these did not apply to the parthenon's virgins.
The parthenon also differed from the convent in the members' way of life. Contemporary monasteries around Thebes seem predominantly to have adopted a kelliotic system, in which each monk or nun had a separate cell. The monastery of Hosios Meletios, founded by Meletios the Younger on the border of Boeotia and Attica, consisted of the great lavra (μεγάλη λαύρα) and the nearby paralavria (παραλαύρια). The paralavria were implied to comprise secluded cells, an arrangement most likely adopted also by the great lavra, as indicated by the root lavr- that they share. 104 Monks in Hosios Meletios probably resided in their own individual cells. In a monastery primarily modelled upon Hosios Meletios in Areia (east of Nauplion, Peloponnese), each monk seems to have had his own cell. 105 We may assume that other nearby monasteries founded by Meletios and his disciples probably adopted the same arrangement. 106 In kelliotic monasteries, handiwork was carried out mainly in individual cells rather than defined workshops. The typikon of the convent of Bebaia Elpis in Constantinople (dated 1327-35) stipulated that in a nun's spare time, she should stay in her own cell praying, reciting or reading the psalms, or performing textile-related manual labour. 107 Closer to Thebes, the typikon of the monastery in Areia provided that each monk should go to his own cell and engage in handiwork after meals. 108 The virgins in the parthenon lived differently. According to Balsamon, they ate and slept in the same place. 109 In other words, instead of occupying their own individual cells, the virgins shared a dormitory. Such a monastic lifestyle must have been rare at that time: Balsamon describes it as almost abandoned and preserved only in cenobitic convents and Latin monasteries. 110 The typikon of one such convent fortunately survives to the present: that of the Kecharitomene (dated 1110-16). The typikon commanded the nuns to sleep in dormitories. This arrangement was intended to make the residents visible to one another so that the indolent might imitate the more industrious in virtue and good works. The typikon implied that an ideal dormitory was a room consisting of two sections: one for sleep and the other for handiwork. The handiwork was managed and supervised by the abbess herself. To ease the toil while working, one of the nuns would read a portion of the Scriptures chosen by the abbess. 111 The c.900 vita of St Theodora of Thessalonike (812-92) provides additional details about such a dormitory system through the example of St Stephen's convent in Thessalonike. 112 The vita implies that the nuns in the convent slept together in a dormitory, and that each was assigned a sleeping location that was not subject to change without the superior's consent. The dormitory must also have been equipped with tables, benches and a furnace, given that meals were transferred from the refectory to the dormitory during cold winters. Although not specified in the vita, the dormitory probably also functioned as the place for nuns' handiwork, as in the convent of the Kecharitomene. This must have been the case during cold winters: the tables and benches may also have been used for dining and handiwork interchangeably. Moreover, the vita implies that handiwork was done in a communal setting. 113 Compared to the kelliotic system, the dormitory system the Theban parthenon adopted offered substantial benefits to textile production, especially weaving.
Under the kelliotic system where the residents' handiwork was performed in individual cells, supervision was difficult, and jobbery seems to have occurred frequently. 114 However, in the dormitory system, since a collective space for both sleeping and manual labour was available, the superior could implement effective supervision, as we have seen in the Kecharitomene typikon. Secondly, group work, which was critical to textile production, could be carried out within the dormitory system. To take weaving as an example, ethnological studies have shown that group work was necessary to weave on the predominant type of loom at that time. 115 In the dormitory of the convent of St Stephen, operating a loom involved at least two nuns. 116 Thirdly, evidence from eleventh-century Constantinople suggests that female textile artisans learned their crafts from senior artisans through rigorous training. 117 The dormitory system of the parthenon would have facilitated such training: virgins were traditionally entrusted to senior virgins who would supervise them while living together, 118 and indolent residents could have been effectively motivated, as we have seen in the Kecharitomene typikon. Finally, the same typikon also suggests that the dormitory system made it possible to employ a certain kind of toil-alleviating strategy like listening to the reading of Scriptures. Its efficacy can be compared to listening to music in weaving practices like the one adopted by the Theban artisans, which 'gives an emotional impetus and helps to keep a quick but steady pace in work that is characterised by its repetition and slowness.' 119 We may conclude by saying that Kaloktenes' seemingly aberrant revival of the parthenon in Thebes can be interpreted as an intentional step to introduce a more productive model of monastic organisation to promote the silk industry in his diocese.
A comparison with the beguinage shows that the parthenon's affinity with the Theban silk industry is theoretically plausible. More specifically, if we assume that the female monastic foundations in Thebes were indeed involved in market-oriented textile production, the parthenon would satisfy industrial needs much better than the convent. The less binding lifestyle of the parthenon-based virgins could be much more attractive to women who had been trained in textile-related crafts. Moreover, the virgins could be more motivated to improve their efficiency and skills, and their products could be better commercialised. Besides, the parthenon's dormitory system allowed for effective supervision and group work. More efficient professional tutoring and a toil-alleviating work arrangement were also available in this system.
Concluding remarks
Though the evidence we have is rather slim and mostly conjectural, our analysis of what little there is demonstrates that the unusual projects John Kaloktenes spearheaded in Thebes can be reasonably interpreted as designs to promote the city's silk industry. The aqueduct responded to the increasing demand for water required in silk processing. The conversion of a male monastery into a convent allowed more women to be incorporated into a workshop-like environment expedient for market-oriented silk production. The revival of the parthenon could have been a de facto institutional renovation of the rather rigid convent system to better serve the industry: it not only preserved the convent's advantageous framework but also retained distinctive features much more compatible with industrial needs. Despite the hypothetical nature of the above interpretation at the current stage, hopefully it will inspire future work to prove or disprove it. From the archaeological perspective, perhaps we can expect to find remains of weaving activities in monastic contexts around Thebes, as have been found elsewhere. 120 New sources corroborating aspects of Kaloktenes' metropolitan tenure may also come to light. For example, if Kaloktenes indeed engaged in Christianising Jews, as his modern vita and akolouthia suggest, 121 then, given the Theban Jewish community's large size and attested involvement in silk manufacture, the possible connection between this Christianisation and the silk industry will also be worth delving into. In any case, what we have now learned through the case studies of contemporary Athens, Naupaktos, and Thebes suffices to show that metropolitans' roles in the silk industry in Western Byzantium certainly merit further attention.
"History",
"Economics"
] |
Reliability studies in the determination of quantitative covalent fixation of reactive dyes on cellulose
The accuracy of determining the fixed proportion of two heterobifunctional reactive azo-dyes on cotton cellulose was studied in the present paper. One direct and two indirect (indirect I and indirect II) analytical methods were used in the experiments. The ANOVA (analysis of variance) statistical method was used to evaluate the precision of the measurements. The most reliable results could be achieved by using the indirect I method, while the indirect II method was much inferior. The direct method, definitely more complicated than either of the indirect ones, produced the least accurate results.
Introduction
About sixty per cent of cellulose textile products are dyed with reactive dyes. The fact that 1150 different reactive dyes have been registered in the Colour Index, and that this number has increased by 23 new dyes yearly, demonstrates this importance [1].
The hydrolysis of reactive dyes, which occurs simultaneously with their binding to cellulose, has not yet been completely overcome [2]. Reducing water pollution and improving the economy of dyeing by increasing the proportion of fixed dye are common interests of dye-houses. Elaborating appropriate dyeing technologies with new types of high-fixation reactive dyes is an international target of R&D activities. This tendency generated the idea of analysing and comparing the accuracy of the most frequently used analytical methods for the determination of dye fixation.
All related published methods are well known in detail.
In certain cases, dyed samples are dissolved in concentrated sulphuric acid to evaluate their dye content after reactive dyeing. The absorbance of the obtained solution is measured subsequent to its dilution with distilled water. The fixed dye content in the dyeing is calculated by relating this absorbance to the initial concentration (through absorbance) of the dye-bath [3]. This procedure is referred to in this paper as the "Direct Method".
The "Indirect I method" for the calculation of the fixed dye content is based on the measurement of remaining dye content of the dye-bath together with that of the rinsing solutions [4].
"Indirect II" method combines the reflexion measurement of the sample with the absorption measurement of the dye-bath, resulting in K/S values [5].
The purpose of our work was to determine the precision of three selected methods for measuring the fixed dye content on cotton fabric dyed with heterobifunctional reactive dyes [6].
To compare the precision of the studied methods, the analysis of variance (ANOVA) technique was chosen [7].
Materials and equipment
• Bleached and mercerized cotton fabric (surface density: 109 g/m²)
• Two heterobifunctional reactive azo dyes (abbreviated codes B and C) (Table 1).
Dyeing experiments
Cotton fabric samples (5 g each) were dyed over three days, three repeated samples per day, with the two reactive dyes separately (B and C) at two nominal dye concentrations (0.6 g dye/100 g fabric and 3.0 g dye/100 g fabric; liquor ratio = 1:50). The dyeing procedure is shown in Fig. 1.
Thus 18 B dyeings and 18 C dyeings were obtained. The levelness of the dyeings was checked one by one: colour difference was tested at five selected spots in each dyed sample, and the within-sample colour difference was in every case negligible (ΔE*ab < 0.3). Rinsing was performed in 250 ml of liquid per treatment, as follows:
1. distilled water at ambient temperature for 5 minutes
2. acetic acid solution (pH = 5.5) at 50 °C for 5 minutes
3. distilled water at 90 °C for 5 minutes
4. distilled water at 95 °C for 5 minutes
5. distilled water at 50 °C for 5 minutes
6. distilled water at ambient temperature for 5 minutes.
Methods for the determination of the fixed dye content

Direct method

0.1 g of undyed fabric (conditioned before weighing at 65% relative humidity and 21 °C) was dissolved in 10 ml of concentrated sulphuric acid at 0 °C. Seven dye solutions (from 5·10⁻³ g/l to 6·10⁻² g/l) were prepared in 10 ml of distilled water from both dyes separately. The concentrated sulphuric acid solution was mixed very carefully with the respective prepared dye solution prior to dilution to 25 ml with distilled water. An absorbance vs. concentration calibration curve was constructed for both dyes.
To determine the actual dye content of the samples, 0.1 g of dyed fabric (from the 5 ± 0.01 g dyed sample) was dissolved in 10 ml of concentrated sulphuric acid at 0 °C. This solution was very carefully poured into 10 ml of distilled water and the mixture filled up to 25 ml with distilled water. The same procedure was followed with the undyed fabric sample. The dye content of the studied sample was determined through the difference of the absorbances measured for the dyed and undyed samples (A_d and A_w, respectively) (Eq. 1):

$A_f = A_d - A_w$ (1)

where A_f is the absorbance attributed to the covalently fixed dye content of the studied fabric. Commercial dyes contain not only the pure dye compound; this was taken into account by using the calibration function obtained with the same dye product. The fixed dye content of the samples was recalculated for 5 g.
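To make the recalculation explicit, here is a worked form under the stated assumptions (a linear calibration function $c = f(A)$ returning the commercial-dye concentration in g/l, and the 0.1 g aliquot dissolved and diluted to 25 ml as described above):

$m_{5\,\mathrm{g}} = f(A_f) \cdot 0.025\ \mathrm{l} \cdot \dfrac{5\ \mathrm{g}}{0.1\ \mathrm{g}}$

i.e., the dye mass found in the 25 ml solution, scaled from the 0.1 g aliquot to the whole 5 g sample.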
Indirect I method
After dyeing, the exhausted dye bath and the rinsing liquors were united before diluting the mixture to 2000 ml with distilled water. The fixed dye concentration was calculated from the absorbance of this mixture (A_uf) and that of the initial bath (A_0) (Eq. 2).
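The exact form of Eq. 2 is not preserved in this copy. A plausible reconstruction, assuming the Beer-Lambert proportionality between absorbance and concentration and denoting the initial bath volume $V_0$ and the united, diluted liquor volume $V_{uf}$ (= 2000 ml), expresses the fixed fraction as the share of dye that did not remain in the liquors:

$F = \dfrac{A_0 V_0 - A_{uf} V_{uf}}{A_0 V_0}$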
Indirect II method
This method is based upon the combined application of absorption and reflection spectra. The absorbance of the dye bath has to be measured prior to (A_p) and subsequent to (A_s) the dyeing procedure. The dyed fabric sample was taken out of the dye bath and, after squeezing and drying, its reflection spectrum was measured (R_1). The same dyed fabric was then washed carefully with distilled water, dried at 105 °C, and its reflection spectrum determined again (R_2). The respective K/S values were calculated by means of the Kubelka-Munk equation (Eq. 3):

$K/S = \dfrac{(1-R)^2}{2R}$ (3)
From the K/S values obtained the fixed dye content was calculated as follows (Eq.4):
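Eq. 4 is likewise not preserved in this copy. A common form in the reactive-dyeing literature, given here as an assumption rather than as the authors' exact formula, multiplies the bath exhaustion by the fraction of colour strength retained after washing:

$F = \dfrac{(K/S)_{R_2}}{(K/S)_{R_1}} \cdot \dfrac{A_p - A_s}{A_p}$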
Evaluation of the determination methods of the fixed dye content through analysis of variance
The effects of the factors influencing the experimental results and the random error of measurement were assessed by analysis of variance.
The conditions for applying the analysis of variance method are: • random fluctuation of the residuals (the differences between the measured and calculated values) around zero, checked graphically, • normal distribution of the residuals, checked by a normal probability plot, • and homogeneous variances.
The following model has been used (Eq. 5):

$y_{ijkl} = \mu + \alpha_i + B_j + (\alpha B)_{ij} + \gamma_k + \varepsilon_{l(ijk)}$ (5)

where: y_{ijkl} is the dye proportion obtained on the i-th day, with the j-th concentration, k-th repeated dyeing, and l-th repeated measurement; μ is the mean value of the fixed proportion; α_i is the effect of the i-th day; B_j is the effect of the j-th dye concentration; (αB)_{ij} is the interaction between the i-th day and the j-th dye concentration; γ_k is the effect of the k-th repeated dyeing; ε_{l(ijk)} is the (analytical) measurement error in the l-th repeated analysis.
The dyeing experiment was performed under the following conditions: i = 1, 2, 3 days and j = 1, 2 dye concentrations, and the dyeing experiment was repeated three times (k = 1, 2, 3). In the direct method there were no repeated chemical analyses, thus l ≡ 1, while in the indirect methods three repetitions were performed (l = 1, 2, 3).
In the indirect II method, the concentration was evaluated at five different locations on the textile, and this procedure introduced a further random factor.
Consequently, the model has been modified as follows:

$y_{ijklm} = \mu + \alpha_i + B_j + (\alpha B)_{ij} + \gamma_k + \delta_l + \varepsilon_{m(ijkl)}$

where δ_l is the effect of the l-th location, a random factor reflecting the inhomogeneity of the fixed dye content along the fabric, and ε_{m(ijkl)} is the (analytical) measurement error in the m-th repeated analysis.
The Statistica 7.0 software was used for mathematical analysis.
Results and discussion
No difference could be distinguished between the B and C dyeings according to the ANOVA calculations. Consequently, detailed discussion follows only for the C dyeings, while for the B dyeings only the box-plot will be evaluated in detail.
For dye C the residuals (difference between the measured and calculated values) were checked for all 3 methods.The residuals scatter around zero without any systematic behaviour.
The Normal probability plot of residuals is shown in Fig. 2 for C dyeings.
The residuals in the plot scatter around a straight line, thus the normal distribution of residuals can be accepted. All conditions of the ANOVA have been fulfilled; consequently its application is justified. In the ANOVA tables, MS is the mean square; p is the probability value (if it is small, the null hypothesis of no effect is rejected); DF is the degrees of freedom, which depends on the number of measurements and the number of factor levels; and F is the test statistic formed from the ratio of mean squares. From the results (Table 2) it can be concluded that none of the factors has a significant effect at the 0.05 level. The variance of the measurement was 65.9 %² for the direct method, thus the standard deviation of the measurement error can be estimated as 8 %.

The corresponding value for the B dyeing was 5.2 %. The day factor (α_i) has no significant effect on the calculated results (p > 0.05), whereas the impact of the nominal dye concentration proved to be significant (p < 0.05) (Table 3).
The estimated variance of the repetitions was 0.205 %², which included both the fluctuation due to replicated dyeing and that due to repeated chemical analysis; thus the standard deviation of the measurement error could be estimated as 0.453 %. The corresponding value for the B dyeing was 0.52 %.
The dye concentration has a significant impact on the experimental data (p < 0.05) in the case of the indirect II method as well (Table 4). It was found that the use of reflectance does not give reliable results at high K/S values.
The estimated variance of the differences among the measured data was 0.305 %², thus the standard deviation of the measurement error can be estimated as 0.552 %. The corresponding value for the B dyeing was 0.344 %.
The nature of the variation of the measurement results is well visualized in the median-quartile box-plots. The small rectangle in the middle of the box is the median, the edges of the box are the quartiles, and the whiskers show the range of the data. This kind of plot may also uncover the extent and asymmetry of the distribution. The above-mentioned data are shown in Figs. 3a, 3b and 3c for the B dye and in Figs. 4a, 4b and 4c for the C dye. Values situated beyond the whiskers have to be considered outliers (extremes are statistically not valuable abnormal data).
Conclusions
The dependence of the fixed dye content on the two dye concentrations and the respective method-dependent variance components are shown for the B and C dyes in Table 5.
Nearly equal fixed dye contents were found by the three methods. The fixed dye content proved to be independent of the nominal dye concentration in the selected range, probably because even the higher dye concentration was not close to saturation.
The variance component values however are different for the three methods.
The most precise was the indirect I method. Less favourable was the indirect II method, and the worst was the direct method, which was, in addition, the most complicated and unfavourable in all respects.
The most probable explanation for the deviation in fixed dye content between the heterobifunctional reactive B and C dyes is the difference in their chromophore structures.
Fig. 2. The normal probability plot of residuals for C dyeings, calculated from the data obtained by (a) the direct method, (b) the indirect I method and (c) the indirect II method.

Fig. 4. Median-quartile box-plots for the C dye (a: direct method, b: indirect I method, c: indirect II method).

Tab. 2. Simplified ANOVA table for the evaluation of the direct method for C dyeing.

Tab. 3. Simplified ANOVA table for the evaluation of the indirect I method for C dyeing.

Tab. 4. Simplified ANOVA table for the evaluation of the indirect II method for C dyeing.
"Chemistry",
"Materials Science"
] |
Utilizing the Hall Effect-Based Current Sensor ACS712 for True RMS Current Measurement in Power Electronic Systems
Current measurement in power electronic systems is a necessary part of the measurement process. There are different ways of measuring current, such as current transformers or Rogowski coils, which are not precise enough for many applications and not suitable for use in power electronic measurement systems. For that reason, a Hall effect-based sensor can be used as a very precise alternative with minimal external components. This paper presents the use of the Hall effect current sensor ACS712 for current measurement in a microcontroller system. The measurement with the Hall effect sensor is described, with a comparison of the values measured by the microcontroller system, a multimeter, and the high-precision power analyzer Chauvin Arnoux 8335.
Introduction
Hall effect current sensors are a very important type of device in electrical measurement in general. The working principle of current sensors is based on the conversion of a physical quantity into an electrical one. Current transformers, magnetoresistance sensors, and Hall effect sensors are the most common current sensors [1]. Current transformers are magnetic devices which assist in the measurement of current. The operating frequency of current transformers in electrical circuits is typically 50 to 60 Hz, but they can also work from 25 Hz to 400 Hz. For high-precision measurement, transformation accuracy is 0.1% or better; for general indication or overload detection, accuracy is several percent. With current transformers it is possible to measure current levels from less than an amp to thousands of amperes or more [2].
Magnetoresistive (MR) sensors are linear magnetic field transducers based either on the intrinsic magnetoresistance of a ferromagnetic material (sensors based on the spontaneous resistance anisotropy in 3d ferromagnetic alloys are also called anisotropic magnetoresistance (AMR) sensors) or on ferromagnetic/non-magnetic heterostructures, which include giant magnetoresistance multilayers, spin valves, and tunneling magnetoresistance devices [3].
The phenomenon of the deflection of charge flow in a metal plate placed in a magnetic field is known as the Hall effect. The current flow causes a voltage difference between the sides of the plate, called the Hall potential. A sensor designed on the Hall effect principle to detect a magnetic object is a Hall effect sensor. Eq. 1 shows that a magnetic field B placed perpendicular to the metal plate (conductor or semiconductor) exerts a deflecting force F (the Lorentz force) on a charge q moving with velocity v:

$F = qvB$ (1)

where the direction of the force is found by the right-hand rule. In vector form, the Lorentz force on the charge is given by equation 2:

$\vec{F} = q\,\vec{v} \times \vec{B}$ (2)

The Lorentz force deflects the charges in the direction perpendicular to both the charge velocity and the magnetic field. Because of that, a voltage difference appears across the plate sides parallel to the current direction. All Hall effect devices are activated by a magnetic field. In current sensor applications that include Hall effect sensors, the magnetic field is generated by the electric current, and it is this field that produces the Hall effect [4]. The phenomenon of the Hall effect in a metal plate due to the presence of a magnetic field is shown in Fig. 1 [4]. To measure the potential difference (voltage) between the two sides of the plate, a voltmeter can be used. The Hall voltage V_H is given by formula 3:

$V_H = \dfrac{IB}{nqd}$ (3)

where:
- I is the flowing current;
- B is the magnetic induction;
- q is the charge;
- n is the number of charge carriers per unit volume;
- d is the sensor's thickness.

The effect thus observed is the Hall effect. If the magnetic field is stronger, more electrons are accumulated, and if the current is higher, more electrons can be deflected. With the increase in the number of deflected electrons, the potential difference between the two sides of the plate becomes greater. It follows that the Hall voltage is in direct proportion to the electric current and the applied magnetic field [5].
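To give a sense of scale, here is a worked example with illustrative values not taken from the paper: I = 1 mA, B = 0.1 T, n = 10²¹ m⁻³ (a doped semiconductor), q = 1.6 × 10⁻¹⁹ C, d = 10 μm:

$V_H = \dfrac{(10^{-3}\,\mathrm{A})(0.1\,\mathrm{T})}{(10^{21}\,\mathrm{m^{-3}})(1.6\times10^{-19}\,\mathrm{C})(10^{-5}\,\mathrm{m})} \approx 62.5\ \mathrm{mV}$

In a metal, where n is on the order of 10²⁹ m⁻³, the same conditions would give a voltage roughly 10⁸ times smaller, which is why practical Hall sensors use thin semiconductor elements.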
The introduction of the paper has presented an explanation of the basic working principle of the Hall effect sensor. The rest of the text presents the application of the Hall effect-based sensor ACS712 in a microcontroller system, with experimental results obtained in the laboratory and current measurement using the Chauvin Arnoux 8335 power analyzer.
Hall-effect current sensor ACS712
The Hall effect sensor ACS712, manufactured by Allegro, consists of a precise, low-offset, linear Hall sensor circuit with a copper conduction path located near the surface of the chip. The magnetic field generated by the current flowing through this copper conduction path is sensed by the integrated Hall IC and converted into a proportional voltage. The accuracy of the device is optimized through the proximity of the magnetic signal to the Hall transducer. The internal resistance of this copper conductor is typically 1.2 mΩ. The sensor comes in a small, surface-mount SOIC8 package. The output rise time in response to a step input current is 5 μs. The ACS712 typical application circuit is shown in Fig. 2 [6]. This sensor can be used for measuring direct current (DC) and alternating current (AC). It has a low noise level and a low error: 1.5% at T_A = 25 °C, and 4% from −40 °C to 85 °C [7].
This research used a version of the ACS712 current sensor which can measure currents up to 30 A. This type of sensor has a sensitivity of 66 mV/A. The mean total output error as a function of ambient temperature and the output voltage as a function of the sensed current are shown in Figures 3 and 4, respectively [8].
True RMS current measurement
Measuring the RMS (root mean square) value of a signal is one of the fundamental measurements of the magnitude of AC signals. RMS voltage or current measurement is done by utilizing RMS converters. In the ideal case, an RMS converter measures the RMS value of the input signal independently of the signal amplitude, frequency, or wave shape. For AC to DC and DC to AC conversion, power electronic switching devices are most often used. With true RMS measurement of electrical system parameters it is possible to read the true magnitude of current and voltage, which is very helpful for precise loss estimation. Measuring instruments for AC electrical values can be classified as rectify-and-average type, analog-computing type, thermal type, and computational type, the last being based on a sampling technique. The rectify-and-average measurement gives an increasing error as the input departs from sinusoidal and is sufficiently precise for sine-wave signals only. Bandwidth limitations are typical for the analog-computing type of measurement. The thermal type provides high precision and a high bandwidth, but with high design complexity and cost constraints. The most widely used digital sampling technique-based measurement is digitizing with an A/D converter, with the RMS value of the signal calculated by a DSP processor or microcontroller. The achievable accuracy for any given bandwidth depends on the A/D converter precision and sampling rate.
Normally, DSP processors and a high-speed, high-precision A/D converter are used for true RMS measurement by the digital sampling technique. Because a unipolar A/D converter is used with a bipolar input signal, it is necessary to shift the reference level of the bipolar input by a DC offset equal to Vref/2.
This voltage shifting ensures that the bipolar input signal varies about the reference level Vref/2 instead of ground level, so that the peak-to-peak value of the input signal is confined between ground and Vref. The DC offset is subtracted from the input signal after digitization. For this method an accurate DC level-shifter circuit is necessary: a small variation in the DC offset gives an error in the measurement. This technique can, however, reduce the effective precision of the A/D converter. The peak-to-peak variation of the input signal is from ground to Vcc [9]. The true RMS current calculation in the recommended method is based on equation 3:

$I_{RMS} = \sqrt{\dfrac{1}{N}\sum_{k=1}^{N} i_k^2}$ (3)

Rectify-and-average RMS measurement, by contrast, is not reliable, because in electrical circuits there is noise on the sine-wave signal (caused by various motors, switches, devices, cheap power supplies, etc.), and thus distortion of the sine wave, which ultimately leads to large measurement errors.
True RMS current measurement is a very useful method for measuring current in systems with non-sinusoidal signal shapes, typical of power electronic systems with thyristor and triac regulation or similar components and circuitry. True RMS measurement is necessary for variable-speed motor drives, electronic ballasts, personal computers, HVAC, and solid-state devices [9].
Practical implementation of ACS712 with microcontroller system
The current sensor used in this experiment is a module on a printed circuit board with two filter capacitors; it is supplied with 5 V by an MP2315 buck regulator [10], which is integrated with the microcontroller in the same case. Two wires are soldered to the module for easier connection of different types of electrical loads. The ACS712 current sensor module is shown in Fig. 5. The microcontroller system used for this research is based on the Microchip 8-bit AVR microcontroller ATMega328P [11] and is programmed in the MPLAB IDE development environment. The hardware part of the system was designed in Altium Designer. Fig. 6 shows the microcontroller system. The device also has RS232 and CAN bus interfaces, so it can be connected to a PC or another device for archiving the data from the sensor.
The ATMega328P microcontroller has a 10-bit ADC, which is used here to measure the voltage from the current sensor. The sensitivity of the sensor is 66 mV/A and its characteristic is linear; in this case it is used for measuring AC. The measured sensor voltage must first be recovered from its digital representation, as given by formula 5, where:
- Vsens is the voltage on the output of the current sensor;
- Vbin is the binary representation of the voltage on the sensor output;
- Voffset is the voltage on the sensor output when the input current is 0 A, in this case 2.5 V.
Because the sensitivity is 66 mV/A, the current through the sensor can then be calculated as given by formula 6. The true RMS value of the current is calculated as shown in formula 7, which is also implemented in the microcontroller program; the true RMS calculation is performed over 200 steps.
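The following Python sketch mirrors the calculation chain just described. It is an algorithmic reconstruction, not the actual firmware: the constants are taken from the text above, but the exact arrangement of formulas 5-7 (in particular, whether the offset is removed before or after the code-to-volts conversion) is our assumption.

```python
import numpy as np

VREF = 5.0           # ADC reference voltage (V), assumed equal to the 5 V supply
V_OFFSET = 2.5       # sensor output at 0 A (V), as stated above
SENSITIVITY = 0.066  # ACS712-30A sensitivity (V/A)
N_STEPS = 200        # samples per RMS window, as in the firmware

def true_rms_current(adc_codes: np.ndarray) -> float:
    """Convert raw 10-bit ADC codes into a true RMS current in amperes."""
    v_sens = adc_codes * VREF / 1024.0      # formula 5: digital code -> volts
    i = (v_sens - V_OFFSET) / SENSITIVITY   # formula 6: remove offset, scale to amps
    return float(np.sqrt(np.mean(i ** 2)))  # formula 7: true RMS over the window

# Quick check with a synthetic 5 A-RMS sine sampled in 200 steps (two periods):
t = np.arange(N_STEPS) / N_STEPS
i_true = 5.0 * np.sqrt(2) * np.sin(2 * np.pi * 2 * t)
codes = np.round((i_true * SENSITIVITY + V_OFFSET) * 1024.0 / VREF)
print(true_rms_current(codes))   # ~5.0 A, up to quantization error
```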
The mean measurement error, calculated as the mean value of the individual errors, is 17.44%. Current measurement with the non-calibrated sensor is useless at this level of error, so calibration was performed by multiplying the results by a calibration constant of 0.8256, calculated from the errors. In the next section, the complete measurement is improved so that the maximum error is 3.43%, which is precise enough for this type of measurement. Fig. 6 presents the connection between the ACS712 sensor and the microcontroller. Table 1 presents the results of measurements with the non-calibrated ACS712 sensor, with the error between the microcontroller system's measurement and the high-precision analyzer Chauvin Arnoux C.A 8335 used for checking the electrical values of the power network.
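As a minimal sketch of how such a single calibration constant can be derived, one can average the ratios between reference and uncalibrated readings; the reading pairs below are hypothetical placeholders, not the values from Table 1.

```python
import numpy as np

# Paired readings: uncalibrated microcontroller result vs. reference analyzer (A).
measured = np.array([2.42, 5.95, 9.80])    # hypothetical uncalibrated values
reference = np.array([2.00, 4.90, 8.10])   # hypothetical analyzer values

k = np.mean(reference / measured)          # single multiplicative correction
calibrated = k * measured
print(f"calibration constant k = {k:.4f}")
```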
Experimental setup
The laboratory work involves equipment for measuring electrical values, in this case AC. All circuit devices are connected on an insulated table in compliance with all high-voltage measurement safety requirements. The devices are connected following a connection block diagram; the block diagram for connecting the measurement system is shown in Fig. 7. Measurement is done in 4 steps.
The first step is connecting a 2000 W load to the sensor, a hot-air fan with thyristor regulation. It produces non-sinusoidal voltage and current shapes, consisting of a compound, mixed AC signal defined by the on and off times of the triac phase regulation.
The second step involves connecting a 22 W fluorescent lamp to the sensor; its integrated inductance means this load also produces a non-sinusoidal current shape.
The third step involves connecting a 2000 W heater with a low-power AC fan motor, without thyristor temperature regulation; it contains only the thermal element.
The fourth and final measurement step is connecting a computer as a load through its switch-mode power supply (SMPS) charger, with an output power of 80 W. To verify the results of the microcontroller measurement system, a high-precision power and quality analyzer Chauvin Arnoux C.A 8335 is used, shown in Fig. 8 together with a current clamp and a UNI-T multimeter. The results from the microcontroller system are output over the RS232 serial interface, which allows current values to be read directly on a computer. The current measurements are also presented on the display of the power analyzer, so they are readily available for reading and comparison with the values from the microcontroller system. Fig. 9 shows the instrumentation connected for this experiment with the ACS712 sensor, wired according to the block diagram.
Experimental results
All results of the measurement process are given in the tables; there are 4 measurements. The first measurement covers the current consumption of a 2000 W hot-air fan dryer controlled by the triac; in this case the distortion of voltage and current is most visible. Alongside the microcontroller system, the high-precision equipment for this experiment comprises the Chauvin Arnoux C.A 8335 power analyzer and the Chauvin Arnoux F27 harmonic and power meter current clamp. The results of the measurement are presented in Table 2 and Table 3. The third measurement was performed using a 2000 W heater (FIRST FA-5568-2) as a load, without regulation, so in this case we have a resistive load if we disregard the small parasitic resistance and inductance in the circuit. The results of this measurement are given in Table 4. The fourth measurement uses a Fujitsu power supply and an 80 W laptop; the measurement results are shown in Table 5.
Conclusion
This paper has presented the use of the ACS712 Hall-effect sensor for precise current measurement in power electronic systems. It has also presented the calibration of the sensor using a high-precision power analyzer, measuring the current through the sensor with different electrical loads. Finally, the measurement error is presented, which is small enough for measurements in the field of power electronics, especially in motor control systems and frequency regulators.
"Engineering",
"Physics"
] |
Analysis of mutational dynamics at the DMPK (CTG)n locus identifies saliva as a suitable DNA sample source for genetic analysis in myotonic dystrophy type 1
Genotype-to-phenotype correlation studies in myotonic dystrophy type 1 (DM1) have been confounded by the age-dependent, tissue-specific and expansion-biased features of somatic mosaicism of the expanded CTG repeat. Previously, we showed that by controlling for the confounding effects of somatic instability to estimate the progenitor allele CTG length in blood DNA, age at onset correlations could be significantly improved. To determine the suitability of saliva DNA as a source for genotyping, we used small pool-PCR to perform a detailed quantitative study of the somatic mutational dynamics of the CTG repeat in saliva and blood DNA from 40 DM1 patients. Notably, the modal allele length was only moderately higher in saliva and not as large as previously observed in most other tissues. The lower boundary of the allele distribution was also slightly higher in saliva than in blood DNA. However, the progenitor allele length estimated in blood explained more of the variation in age at onset than that estimated from saliva. Interestingly, although the modal allele length was slightly higher in saliva, the overall degree of somatic variation was typically lower than in blood DNA, revealing new insights into the tissue-specific dynamics of somatic mosaicism. These data indicate that saliva constitutes an accessible, non-invasive and suitable DNA sample source for performing genetic studies in DM1.
Introduction
Myotonic dystrophy type 1 (DM1) is the most common dominantly inherited myopathy in adults. It is a progressive and disabling disease that shows a highly variable phenotype, both in severity and in clinical manifestations. The main symptoms include myotonia, muscle wasting and weakness, cardiac problems, cataracts, somnolence, cognitive dysfunction and behavioral abnormalities. [. . .] The cohort included [. . .] juvenile-onset cases, two congenital-onset cases and one carrier subject who was asymptomatic at sampling. The DM1 population has already been well characterized and the age of onset has been previously recorded and reported [16,18]. Age of onset was based on the detection of physical myotonia (grip myotonia), muscle weakness and/or the presence of cataracts. Age of onset was recorded after clinical evaluation by one of four different experienced neurologists, or after an interview by the same neurologists or by one of two different experienced clinical geneticists.
For saliva collection, in order to increase the fraction of buccal epithelium cells recovered, the patients were requested to carefully wipe the inner side of their cheeks with their tongue and spit into a collection tube until obtaining ~5 ml of saliva. Simultaneously, 10 ml of peripheral blood was drawn into EDTA-containing vacutainer tubes. DNA was isolated by proteinase K/phenol-chloroform extraction, quantified by optical density at 260 nm in a NanoDrop spectrophotometer (Thermo Scientific, USA) and stored at -20°C. The project was approved by the Scientific-Ethics Committee of the Universidad de Costa Rica, and all samples were collected after obtaining written informed consent in accordance with the approved protocols.
Molecular analysis
Measuring ePAL and degree of somatic instability. To estimate the PAL and determine the degree of SI in each sample, we used SP-PCR as previously described [15,16]. Briefly, for estimating the PAL, we performed five reactions per sample with ~200 to 300 pg of input DNA, and the PCR products were hybridized with a (CTG)66 radiolabeled probe. The PAL was estimated as the approximate lower boundary of the total allele distribution obtained for each sample [16,18].
In order to carry out a detailed quantitative analysis of somatic mosaicism, we used single molecule SP-PCR (using 10 to 70 pg of input DNA per reaction) to measure at least 50 single molecules per sample per patient. The degree of SI was defined as the difference between the 10th and 90th percentiles of the total allele distribution, as described previously [16,18]. SP-PCR products were detected by radioactive Southern blot hybridization and sized using UVIbandmap software (UVITEC, UK).
Screening for variant repeats. Previously described methods [27] were followed in order to identify the presence or absence of AciI-sensitive variant repeats in the Costa Rican DM1 samples. Briefly, we carried out two PCRs per sample using 400 to 500 pg of input DNA, followed by AciI restriction digestion according to instructions provided by the manufacturer (New England Biolabs, USA). Through this approach, we were able to exclude the most commonly observed CGG and CCG variant repeats within the CTG repeat expansion, but this does not exclude the presence of other variant repeat types in the samples analysed in this study. Digested and undigested PCR products were resolved by agarose gel electrophoresis and detected by Southern blot hybridization. A positive variant repeat sample was analysed in each experiment to confirm the presence or absence of variant repeats in the samples under investigation. The structure of the positive variant repeat allele is (CTG)225(CCG)1(CTG)1(CCG)1(CTG)4(CCG)1(CTG)1(CCG)1(CTG)1(CCG)2(CTG)1(CCG)1(CTG)1(CCG)1(CTG)23.
DNA methylation analyses of CTCF binding sites. Analysis of DNA methylation levels at two CTCF binding sites flanking the (CTG)n repeat at the DMPK locus was carried out through the PyroMethA technique (Pyrosequencing-based Methylation Analysis, PMA). The assays employed were designed to interrogate 11 CpG sites upstream of the CTG repeat (six within the first CTCF-binding site, 'CTCF1') and six CpG sites downstream of the CTG repeat (three within the second CTCF-binding site, 'CTCF2') [28,29]. Firstly, 300 ng of DNA from each sample was subjected to sodium-bisulfite treatment using the EZ DNA Methylation-Gold kit (Zymo Research, USA), according to instructions provided by the manufacturer. This treatment converts unmethylated cytosines to uracils, while leaving 5-methylcytosines (5-mC) unaffected. The presence of cytosine residues (as indicative of methylation) flanking the CTG repeat expansion was later detected quantitatively through pyrosequencing. Oligonucleotides required for this purpose were custom designed using the PyroMarkQ Assay Design software 1.0 (Biotage, USA) and optimized accordingly (see S1 Table for the complete list of primers used in this study).
Briefly, PCR amplification of 15 ng of bisulfite-treated DNA was carried out in a final reaction volume of 25 μl, containing 1X HotStarTaq Master Mix (Qiagen, Germany), 100 pmol of gene-specific forward primer (either PS-DMPK-F3 for CTCF1, or PS-DMPK-F4 for CTCF2), 10 pmol of gene-specific reverse primer (either PS-U2-DMPK-R3 for CTCF1, or PS-U2-DMPK-R4 for CTCF2) and 90 pmol of biotinylated universal primer (PS-Bio-UNIV2). Amplification was performed with a denaturing step of 5 min at 95°C, followed by 45 cycles of denaturing for 30 s at 95°C, annealing for 1 min at 51°C for CTCF1 or 50°C for CTCF2, and extension for 45 s at 72°C. A final extension step was performed at 72°C for 7 min.
Amplified PCR products (8 μl) were combined with 2 μl streptavidin sepharose high-performance beads (GE Healthcare, UK), 40 μl of binding buffer (Biotage, USA) and 30 μl of MilliQ water, and subjected to single-strand isolation of the biotinylated template using the PyroMark Vacuum Prep WorkStation (Biotage, USA) as instructed by the manufacturer. Isolated products were dispensed into optical plates containing 12 μl of the corresponding sequencing primer (either PS-DMPK-S3 for CTCF1, or PS-DMPK-S4 for CTCF2) dissolved in annealing buffer (Biotage, USA) to a final concentration of 0.4 μM. To allow annealing of the sequencing primer to the template, plates were incubated for 5 min in a heating block at 85˚C, left to cool for 5 min and then placed at room temperature for 5 min.
Pyrosequencing was carried out using the PSQ96 HS platform (Biotage, USA) and PyroMark Gold Q96 reagents (Biotage, USA) according to the manufacturer's instructions and analysed with Q-CpG software (Biotage, USA), which estimates the methylation percentage for each of the interrogated CpG sites. The average methylation value for all CpG sites analysed in each assay was calculated, and the CTCF-binding sites were considered methylated when this value was higher than 10% [30].
Statistical analysis
Paired-sample t-tests were carried out in SPSS Statistics 19 (IBM, USA) to compare ePAL and SI between the two different sample sources, whereas single and multiple linear regressions were used to identify the major modifiers of the age of onset and of the degree of SI in each tissue. Frequency curves from total allele distributions were compared through Anderson-Darling (AD) testing, using the kSamples 1.2-4 package for R.
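For readers who prefer an open-source route, the same three analyses can be sketched in Python; this is a minimal illustration with hypothetical values, not the authors' SPSS/R scripts.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient values; the real data are in S2 Table.
epal_blood = np.array([85.0, 160.0, 310.0, 520.0, 780.0])
epal_saliva = np.array([88.0, 170.0, 420.0, 610.0, 902.0])
age_onset = np.array([44.0, 30.0, 22.0, 12.0, 6.0])

# Paired-sample t-test comparing ePAL between tissues
t_stat, p_val = stats.ttest_rel(epal_blood, epal_saliva)

# Simple linear regression of age at onset on log10(ePAL)
reg = stats.linregress(np.log10(epal_blood), age_onset)

# k-sample Anderson-Darling test comparing two allele distributions
ad = stats.anderson_ksamp([epal_blood, epal_saliva])

print(f"paired t = {t_stat:.2f} (p = {p_val:.3f}); "
      f"r^2 = {reg.rvalue**2:.3f}; AD statistic = {ad.statistic:.2f}")
```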
ePAL measured from saliva samples can be used for clinical correlations in DM1
Using SP-PCR, we were able to amplify the expanded CTG allele in both blood and saliva DNA in all of the DM1 samples (Fig 1). We observed that the modal allele lengths measured in the two tissues were highly correlated (r = 0.879, n = 38, p < 0.001, Fig 2A) and that the modal allele in saliva was typically slightly larger than in blood (mean modal allele in blood = 486 repeats; saliva = 529 repeats; t = -1.74, df = 37, p = 0.090, Figs 1, 2A and 2D; data in S1A Fig).
Interestingly, we identified two individuals who presented with a small non-disease-associated allele (< 50 repeats) and two additional clearly expanded alleles (≥ 50 repeats) in the two tissue sources analysed. These patients showed the typical adult-onset form of the disease. The presence of two expanded alleles is assumed to reflect an early embryonic mutation event [31,32], and because of the difficulty in defining the ePAL or assigning somatic variants to the appropriate allele in such individuals, these two cases were excluded from further analysis.
Previously, we estimated the PAL as the lower boundary of the allele distribution after performing SP-PCR with 200 to 300 pg of input DNA obtained from peripheral blood [16,18]. Here, using the same approach, we investigated whether the lower boundary observed in PBL DNA was conserved in DNA derived from saliva collected at the same point in time. The PAL was estimated from both tissue sources in 40 DM1 patients (80 samples in total). We observed that blood and saliva ePALs were highly correlated (r = 0.908, n = 38, p < 0.001, Fig 2B). In general, the ePAL was larger in saliva than in blood (mean ePAL in blood = 310 repeats; saliva = 414 repeats; t = -5.32, df = 37, p < 0.001, Figs 1, 2B and 2D; data in S1B Fig and S2 Table). This difference was most evident in patients with ePALs larger than 150 CTGs, among whom only one patient showed a larger ePAL in blood than in saliva DNA. When the ePAL was smaller than 150 CTG repeats, the lower boundaries of the distribution of expanded alleles, and therefore the ePALs, in DNA from the two tissue sources were very closely conserved.
With the aim of determining which sample source might be more suitable for establishing genotype-to-phenotype correlations in DM1, we explored the relationship between ePAL and age at onset of symptoms. One mutation carrier was excluded from these analyses, as he remained asymptomatic at the time of sampling. Linear regression models showed that the logarithm of the PAL estimated in blood DNA explained 75% of the variation in age at onset, whereas the logarithm of the PAL estimated in saliva DNA accounted for only 66% of the variation (Model 1, Table 1). This analysis did not reveal a significant difference (Fisher r-to-z transformation, z = -0.73, p = 0.465) between the coefficients of determination for blood (r^2 = 0.748, n = 37) and saliva (r^2 = 0.661, n = 37). A previous study suggested the presence of additional nonlinear components in the regression models of age of onset against allele size [16]. Thus, we included a quadratic component in the model, but this did not lead to any significant improvement (Model 2, Table 1). Given that the modal allele length in saliva DNA was greater than that observed in blood, the net average rate of expansion appears to be greater in saliva than in blood. Likewise, the larger PAL estimated from saliva suggests that the lower boundary has increased more rapidly in this tissue. This interpretation is consistent with the greater explanatory power of blood ePAL in defining genotype-to-phenotype correlations and suggests the PAL estimated from blood is likely to be closer to the true PAL than that estimated from saliva.

[Fig 1 caption: The lower boundary of the allele distribution in each tissue was used to estimate the PAL. The bottom arrowhead indicates the PAL estimated in blood. In patients with blood ePAL < 150 CTG repeats (CR317 and CR145), the PAL estimated from saliva was about the same, but in patients with ePAL > 150 CTG repeats (CR333 and CR183) the ePAL measured in saliva was larger than in blood. The top arrowhead indicates the modal allele length for each tissue. For each sample, we indicate the ePAL measured in blood (ePAL), the age at sampling (Age_s) and the age of onset (Age_o). The molecular weight marker sizes are shown converted to CTG repeat numbers.]
The behavior of the (CTG)n repeat expansion shows subtle differences between saliva and blood cells in DM1 patients
In order to perform a more detailed quantitative analysis of SI in blood and saliva DNA, we carried out single molecule SP-PCR in 38 DM1 patients. We sized a total of 12,488 mutant alleles, with an average of 164 (± 67) molecules per sample (data in S2 Table). The degree of SI (defined as the difference between the 10th and 90th percentiles of the total allele distribution) was calculated for each sample. As with the ePAL, the degree of SI measured from the two DNA sources was highly correlated (r = 0.667, n = 38, p < 0.001, Fig 2C). Interestingly, excluding the two congenital cases (CDM) in our study, which showed a clearly different SI pattern (Fig 3), we observed a higher degree of SI in peripheral blood than in saliva (mean SI in blood = 329 repeats; saliva = 250 repeats; t = 5.39, df = 35, p < 0.001, Fig 2C and 2D; data in S1C Fig).
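Computationally, the degree of SI for a single sample reduces to a percentile difference, as in this minimal sketch (the simulated repeat sizes are illustrative only):

```python
import numpy as np

# Hypothetical CTG repeat sizes from 164 single molecules of one sample
alleles = np.random.default_rng(seed=1).normal(loc=500, scale=120, size=164)

# Degree of SI: 90th minus 10th percentile of the allele distribution
si_degree = np.percentile(alleles, 90) - np.percentile(alleles, 10)
print(f"degree of SI = {si_degree:.0f} repeats")
```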
By investigating the total allele distributions in DM1 patients with small CTG expansions (< 150 CTG repeats in blood ePAL), we observed that in most of the DM1 patients both cell sources showed similar allele distributions with a positive asymmetry (Fig 3). However, in non-congenital patients with larger alleles (> 150 CTG repeats in blood ePAL), the mutant allele distributions tended to be more symmetrical, being wider for peripheral blood than for saliva cells and, therefore, with the latter distribution immersed within the former (Fig 3). Differences in the boundaries of the total allele distributions were compared (taking the 10th percentile as the lower boundary and the 90th percentile as the upper boundary), and we found that allele distributions in blood and saliva differed to a greater extent at their lower end than at the upper end (mean size difference at the lower boundary = 104.4; upper boundary = 41.9; t = 2.67, df = 37, p = 0.011, data in S2A Fig). In order to analyze and compare the major modifiers of SI in the two DNA sources under study, we ran a multivariate regression model previously used for this purpose [16,18]. As the ePAL measured in blood was considered the best estimate of the actual PAL, we used it in both the saliva and blood SI models (Table 2). As expected, more than 85% of the SI variation in blood DNA from DM1 patients was explained by a complex synergistic relationship between the ePAL and age at sampling, whereas for DNA obtained from saliva the same model explained about 72% of the variation in SI (Table 2; data in S2B Fig), suggesting that other unidentified tissue-specific factors, such as relative DNA repair gene expression levels, might act as modifiers of the behavior of the CTG repeats in buccal cells.

[Fig 2 caption, panels C-D: The dashed line corresponds to the line of best fit for the correlation between the two tissues analysed. Points below the solid line indicate a lower degree of SI in saliva than in blood. Panels in D show a diagrammatic comparison of the degree of somatic instability (SI) and the estimated progenitor allele length (ePAL) from the two DNA sources of the 38 DM1 patients analysed in this project. The whiskers represent the SI range for each tissue of each patient, whereas the diamonds and triangles indicate the modal allele in saliva and blood, respectively. For better comparison, samples were split over three graphs according to the ePAL measured in blood.]

[Table 1 caption: Regression models of the relationship between age at onset (Age_o) and the progenitor allele length (ePAL) estimated from two different DNA tissue sources of the same DM1 patients.]
Neither variant repeats nor methylation levels act as modifiers of SI in the tissues analysed
We next determined the presence or absence of variant repeats (CGG and CCG) within the DM1 (CTG)n repeat and analysed the methylation levels of the two CTCF-binding sites flanking the CTG repeat, in order to determine whether cis-acting modifiers might account for the subtle differences found in the behavior of the CTG repeats between the two tissues [27-29,33]. The relationship between methylation and SI in DM1 is not yet clear, and the presence of variant repeats has been associated with a stabilization of the CTG repeat, which might help to explain the differences we found. However, no CGG or CCG variant repeats were detected in the DNA from blood or saliva in the 38 DM1 patients analysed in this study. This does not exclude the possibility of other, rarer variant repeats in these samples. Regarding the methylation study, we considered the DNA samples to be methylated only if the mean methylation of all of the CpGs analysed was ≥ 10%, as measured methylation levels below 10% are considered unreliable [30]. We only detected moderate methylation levels (between 10 and 50%) upstream of the CTG repeat (CTCF1 site) in one of the two CDM cases (higher in blood than in saliva). Similarly, moderate levels of methylation downstream of the CTG repeat (CTCF2) were also only detected in the two CDM cases analysed in this project, and only in blood DNA (Table 3). All the remaining patients showed mean methylation levels at the two analysed CTCF-binding sites lower than 10% in both tissue sources.
Discussion
By using Southern blot hybridization of restriction-digested genomic DNA, it is possible to measure the modal allele length in blood DNA from DM1 patients. Despite the fact that the allele size thus determined shows a highly significant negative correlation with age of onset, it explains less than 50% of the variation in age of onset [8-10, 12, 34, 35]. We previously demonstrated that these poor correlations are due to the confounding effects of somatic expansion, and that using the ePAL improves these clinical correlations [16,18]. Notably, the modal allele size measured in skeletal muscle is typically much larger than that observed in blood DNA [19-21]. This observation is consistent with a causal role for somatic expansions in driving the tissue specificity of the symptoms. However, repeat lengths in skeletal muscle are usually so large that they cannot be efficiently PCR amplified and need to be measured using Southern blot hybridization of restriction-digested genomic DNA. Moreover, modal allele length in muscle provides even poorer age-at-onset correlations than observed with blood DNA [21]. Again, this can be interpreted as a confounding effect of somatic expansion driving the modal allele length even further from the PAL in muscle. Thus, other tissues in which the repeat is relatively stable might also be suitable for diagnostic purposes. However, it appears that nearly all other tissues previously assessed in DM1 also contain large somatically acquired expansions [13]. Notably, though, the DM1 repeat expansion in cerebellum appears to be even more stable than in blood [29,36], raising the possibility that estimating the PAL in cerebellum could provide even better genotype-to-phenotype correlations in DM1. However, cerebellum is not an accessible tissue for performing genetic analyses in DM1 patients.

Here, we have revealed that the degree of somatic mosaicism of the expanded CTG repeat in saliva is broadly comparable to that observed in blood DNA, and that saliva thus represents an excellent source of DNA for genetic studies in DM1. During the initial review of this manuscript, Pesovic et al. [37] characterized the mutational dynamics of the CTG repeat in blood and buccal cells in a small number of DM1 patients carrying variant repeats in both tissues. They described some features that we also found in our larger cohort: specifically, the progenitor allele length was higher and the levels of somatic instability were lower in buccal cells than in blood, with some differences in the CTG mutational dynamics between the two tissues, but with overall much slower dynamics, triggered by the presence of variant repeats, which confer stability to the CTG repeat tract [27,33].

Obtaining saliva DNA is a much less invasive method than phlebotomy, which is of great benefit especially for patients with a fear of needles. This could be particularly relevant in children with autism-like symptoms, as commonly encountered in juvenile and congenital DM1 cases [1,2].

[Table 3 caption: A total of 11 and 6 CpG sites were analysed for the first (CTCF1) and second (CTCF2) binding sites, respectively. Italicized numerals highlight methylated regions. Methylation levels below 10% were considered baseline levels.]
Furthermore, saliva has been widely used for carrying out large population screening studies; such a study could now be conducted in DM1, given that we have established the mutational behavior and spectrum of the CTG repeat in saliva, and the justification for it increases as we move toward the delivery of novel therapies. Previously, it was shown that the lower boundary of the total allele distributions obtained through SP-PCR was conserved over time and between different tissues [15]. In agreement with this, the PALs estimated from the two analysed tissues were highly correlated in our sample set, with very similar lower boundaries in patients with ePAL < 150 CTG repeats. However, we observed that, though correlated, the boundaries were no longer conserved above 150 repeats, where the PAL estimate was consistently higher when analyzing saliva. This suggests that these differences have arisen from tissue-specific mutational dynamics. Interestingly, the ePAL from saliva explained about 66% of the variation in the age of onset, slightly lower than the 75% explained by the ePAL obtained from blood (Table 1). These data suggest that the PAL estimated from blood more accurately reflects the true PAL. Nonetheless, the ePAL measured using saliva DNA still provided much better age-at-onset correlations than the traditional measurement of the midpoint of the smear obtained through Southern hybridization of blood genomic DNA. Our results thus indicate that saliva could be an appropriate surrogate for performing genetic analyses in DM1. Similar to Pesovic et al. [37], we also used the 10th percentile of the total allele distribution as an alternative estimate of the PAL (data not shown). Although results were similar between the ePAL and the 10th percentile in both tissues, measuring the 10th percentile of the total allele distribution is more technically challenging, more time-consuming and more expensive than measuring the ePAL alone.
Since it has been previously suggested that CTG•CAG somatic instability starts after the first three months of embryonic development [38], right after the separation of the germ layers that give rise to the tissues represented in the sample sources under study (i.e., ectoderm for buccal epithelium and mesoderm for hematopoietic cells) [39,40], it is unlikely that the differences found in the lower boundaries of the allele distributions were caused by an early establishment of embryonic layers with different sizes of mutated alleles. More likely, this phenomenon can be attributed to parameters of the post-natal mutational dynamics of differentiated tissues. Interestingly, although saliva showed a higher lower boundary and a higher modal allele length, PBLs showed higher levels of SI, providing evidence that the mutational dynamics in different tissues do not simply reflect differences in the absolute rate of expansion. Previously, using a mathematical modelling approach, we revealed that the broad repeat length distributions observed in blood DNA are likely driven by a high frequency of small expansions and a similarly high frequency of small contractions [41]. It is feasible that in buccal cells there is a lower rate of contractions relative to expansions. This would cause a greater upward drift of the lower boundary, but would also result in a narrower range of variants (Fig 4). These observations might be comparable to observations in Huntington disease (HD) expanded CAG repeat mouse models, where a wider population of unstable repeats is observed in striatum than in liver, despite a greater increase in mean allele length in liver [42,43]. Indeed, a previous study found similar results when comparing the DM1 mutation in blood cells and the HD mutation in buccal epithelium [44]. In that study, the estimated mutation rates, including both expansions and contractions, were significantly lower in HD buccal cells than in DM1 blood cells, with a lower occurrence of contractions in the former tissue. Although in this case the differences in mutation rates could be attributed to the different genomic contexts of the implicated unstable repeats, the authors hypothesized that the most plausible explanation relates to cell type rather than disease type.
The subtle differences observed in the mutational dynamics among tissues might be accounted for by different cis- or trans-acting tissue-specific genetic factors. Methylation of CTCF binding sites has previously been associated with increased levels of instability of the CAG•CTG repeat associated with spinocerebellar ataxia type 7 (SCA7) [45], and in DM1 methylation seems to vary among tissues, both in humans and in transgenic mice [29]. On the other hand, in some unstable repeat diseases, such as SCA1, SCA8 and DM1, the purity of the respective causal allele has been associated with SI, while variants within the repetitive tract confer stability on the alleles [27,33,46]. In our study, although a higher degree of SI was observed in blood DNA than in saliva, we observed: 1) that the methylation levels of the two CTCF binding sites flanking the (CTG)n repeat were conserved between the two sample sources analysed; and 2) an absence of variant repeats in both of the tissues analysed. This indicates that these factors likely do not contribute to the subtle differences we observed in the somatic mutational dynamics among the tissues analysed.

[Fig 4 caption (partial): Data from this study suggest that in saliva (lower model), the rate of expansions/contractions differs from that in blood, triggering a faster movement of the lower boundary, a more compact allele distribution, a faster progression to a normal distribution than in blood, and a larger modal allele length in saliva over time. In both models, as the number of CTGs increases, the mean and modal alleles increase and their frequency decreases with time.]

It should be noted, however, that the only samples with moderate methylation levels in this study were the two CDM cases analysed, consistent with previous findings that this DM1 clinical form preferentially shows methylation flanking the CTG repeat expansion [28,29,47,48]; it has been suggested that methylation could be used as a biomarker for CDM ([47], Morales et al., in preparation). The study carried out by Barbe et al. [47] and this study are the only ones that have quantified the levels of methylation flanking the CTG repeat expansion. The difference in the levels of methylation found in the two studies could be due to inherent aspects of the assays used. Despite this, and in agreement with what Barbe et al. [47] found, we also found increased methylation in CDM cases and upstream of the repeat, with one patient showing higher levels of methylation than the other.
Interestingly, the two CDM cases showed a clearly different SI pattern from that observed in non-CDM cases, bearing a higher proportion of alleles that had acquired very large contractions in saliva than in PBLs. Previous studies in HD mouse models have made similar observations, showing that mice inheriting large mutated alleles (> 500 CAG•CTG repeats) can show a reversal of the expansion/contraction balance in some tissues, with the accumulation of contractions playing an important role in the levels of somatic variation [42]. It remains to be determined whether the apparent increase in large contractions in congenital patients could be attributed to methylation at adjacent CTCF binding sites. A more detailed study of congenital cases could be pertinent, considering the potential therapeutic benefit of inducing contractions with methylating agents [49].
Conclusions
By comparing two tissue sources, our study has assessed the suitability of buccal cells as an alternative source of genetic material for informative molecular analyses in DM1, providing more accurate prognostic information, something that cannot be obtained from most other DM1 tissues because their repeat sizes are excessively large compared with blood and buccal cells from saliva. The data we present here also provide new insights into the tissue-specific mutational dynamics of the CTG repeat, a feature that is becoming increasingly important in terms of disease severity and progression, and as a target and marker for therapeutic intervention [16,18,27,33,42,50,51]. To achieve effective somatic therapy of the DM1 repeat expansion, careful serial monitoring of therapeutic efficacy and detailed knowledge of the longitudinal CTG mutational dynamics are essential. Clearly, non-invasive access to a readily accessible tissue in which the somatic mutational dynamics have been characterized will facilitate the inclusion of a large, representative DM1 population with the least possible risk.
Although previous studies have already suggested the use of buccal cells for diagnostic purposes in DM1 [52,53], a detailed quantitative validation through single molecule SP-PCR, evaluating the suitability of using saliva instead of blood (the standard source for DNA testing in DM1), had not yet been performed in DM1 patients. Even though we found subtle differences in the mutational dynamics in saliva and blood DNA, we provide evidence that PAL estimation through the SP-PCR assay using DNA obtained from saliva constitutes a good surrogate and a less invasive approach for DM1 diagnosis. Our results are particularly relevant given that in some of the main tissues affected in DM1 (such as skeletal muscle), determination of reliable estimates of the PAL is challenging due to the high levels of somatic mosaicism, which potentially compromises the quality of the clinical correlations obtained. On the other hand, tissues that have proven to be especially stable (such as cerebellum) are not accessible, which limits their usefulness for routine molecular analysis. As demonstrated here, the use of saliva DNA for these purposes, in combination with SP-PCR, constitutes a useful alternative when the collection of blood samples is not feasible or is problematic.
Supporting information

S2 Fig. A. The 10th percentile of the total allele distribution is taken as the lower boundary, whereas the 90th percentile is taken as the upper boundary. The box plot shows a much larger variation in the lower boundary than in the upper boundary between the two tissues. The interquartile ranges (IQR) are indicated as boxes; the medians and means are represented by solid and dotted lines, respectively, subdividing the boxes; error bars indicating the 90th and 10th percentiles are shown as whiskers above and below the box; data points beyond the whiskers are outlying points. B. Polynomial relation of the degree of SI in blood and saliva to the age at sampling and the logarithm of the progenitor allele length (ePAL) estimated in blood. The degree of SI was measured as the difference between the 10th and 90th percentiles of the allele distributions in each sample source. The predicted functions for the polynomial multiple regressions are shown as a mesh. (PDF)
"Medicine",
"Biology"
] |
MERGING PROFESSIONAL AND COLLABORATIVE LEXICOGRAPHY: THE CASE OF CZECH NEOLOGY
This paper aims to relate two linguistic phenomena: neology (along with sources for its study) and collaborative lexicography. A pair of case studies is presented concerning two thematically defined groups of recent Czech neologisms: those abusing the Czech ex-president V. Havel's name and those reflecting the Covid-19 pandemic. An initial dataset was provided by the user-generated content web dictionary of non-standard Czech Čeština 2.0 and the Neomat neology database, fostered by professional linguists. Objective data from a monitor corpus of Czech is used in contrast with the initial dataset and thereby leads to some open questions, especially with regard to the extent to which amateur and professional lexicography, two branches of the discipline, can inspire and enrich each other.

(rouškař 'person wearing a face mask;' [. . .] 'the obligation to wear a face mask during the coronavirus epidemic;' rouškarián 'a person who has a strong belief in the benefits of face masks against the spread of coronavirus and therefore wears one constantly; the opposite is bezrouškarián') or other related concepts (rouškie 'selfie in a face mask;' rouškiss 'kiss through a face mask'). The Czech Republic has become a 'face mask power,' which is well reflected in several neologisms: roušpublika 'obligatorily face-masked country' (< rouška + republika 'republic'); rouški-stán; prorouškovanost 'degree of voluntary wearing of face masks even in situations where wearing them is not obligatory.' To a much lesser extent, this also concerns other protective equipment: respík/respoš 'respirator;' respirožec 'person wearing a respirator on his forehead' (< respirátor + jednorožec 'unicorn'); sněhulák 'special suit for paramedics testing people for coronavirus' (lit. 'snowman'). Other keywords include thus
Introduction
By its very nature, neology is probably the worst lexicographically processed part of the lexicon. It poses a significant problem for lexicographers, starting with the question of which newly registered words will actually come into use and which will sink into oblivion. Earlier dictionaries of neologisms, mostly on paper, were often published with considerable delay and recorded words that were no longer used at the time. Today, this handicap can easily be overcome by regularly updated electronic dictionaries. Linguists register new words in specialized neological databases; in addition, there is also bottom-up lexicography (Carr 1997), built on voluntary contributors willing to take part in proverbial harmless drudgery and generate various collaborative dictionary projects, such as Wiktionary, Urban Dictionary, Macmillan's Open Dictionary, Wordnik, and many others. There is no indication that these users perceive their lack of professional qualification as an obstacle to participation in these collaborative works. Professional lexicographers can significantly benefit from the efforts made by amateur colleagues by applying their theoretical expertise, and should strive to find the desired synergy between these two approaches rather than look askance at lay contributors. In the words of Michael Rundell (2017: 1): 'Crowdsourcing - in its various forms - should be seen as an opportunity rather than as a threat or diversion.'

[. . .] it was filled by the author himself and a handful of his friends and family members, but, over time, the ranks of contributors grew, as did the length of the list of entries. 3 Currently, the C20 dictionary contains more than 20 thousand words; in the last year, it grew by more than 4,000 entries, an average of 135 entries per week, of which about 80 are edited and published. Individual entries are written by users (of whom there are thousands, though only tens are actually active 4), and Kavka, who still oversees the project, edits them and, over time, publishes them.
In October 2018, a selection of entries from the online dictionary was published in book format under the title Hacknutá čeština ('Czech Hacked,' hereinafter: CH - Kavka and Škrabal 2018). It can be seen as a sequel to older dictionaries of non-standard Czech (Ouředník 1988; Obrátil 1999-2000; Hugo et al. 2006) that are quite popular among Czech readers. Knowing that a book, a manufactured article, would be - unlike the web dictionary - unchangeable and irreparable, it was necessary to engage a professional lexicographer in the project. In particular, it was essential to adapt the source material, edited up to this point by an amateur, to specific professional standards. This was one of the biggest challenges of the project: we did not want to smooth out the peculiar 'handwriting' of individual contributors and get rid of the charming tinge of amateur lexicography; on the other hand, it was necessary to establish at least elementary lexicographic principles and stick to them during the compilation process.
The book itself contains more than 3,000 entries, which represents over a quarter of the then online volume (about 11,000 entries). The selection criteria were not strictly defined but rather subjective. The aim was to include all categories of words, so that vernacular words, slang, loanwords, neologisms, and others appeared in the dictionary alongside frequent manifestations of linguistic creativity and humour. These entries are assumed to represent a proverbial 'lexical chronicle of the present day,' describing trends and situations in politics, society, and sports that have arisen over the last ten years. Because they mainly capture temporary and ephemeral realities, they are mostly nonce words that manifest a particular contributor's idiolect and/or prove his or her linguistic creativity. The use of such ad hoc expressions is understandably very limited: they are rather puns and wordplay jokes with a very short life span.
The book has received considerable attention from both the general public and experts (four reprints; 19,000 copies had been printed by the end of 2019). What we consider especially positive is the fact that it attracted readers' attention not only to the Czech lexicon itself but also to dictionaries in general. 5 In addition to the further growth of active contributors, an offspring series of short videos for the internet television channel Stream.cz appeared (almost 600 words were presented in 32 parts), followed in 2020 by a podcast devoted to professional and lay slang terms, and even a board game. A sequel to CH could be released in the future, albeit with some slight conceptual changes (see Section 4).
Neomat
Our other data source is an online 6 database of Czech neology named Neomat (hereinafter: N). For our purposes, we perceive it as, in a way, an 'orderly' supplement to C20, under the umbrella of an official institution (the Czech Language Institute of the Czech Academy of Sciences). Today's version of the database, revised in 2005, is conceived more broadly than before, as an archive of lexical dynamics (registering, among other things, new verbal valencies or shifts in the stylistic evaluation of words), and is open to both the lay and professional public. Although there is the possibility of providing feedback, it does not primarily assume an active, contributing role for its users. The main burden of filling the database thus falls on professional excerptors. Clearly, the project is far from collaborative lexicography, at least compared to C20.
The archive is constantly being extended and updated at weekly intervals; at the end of October 2020, it contained 345,372 entries (tokens, not types). Semantic content is not the focus of this archive, and therefore entries do not include a dictionary definition, with infrequent exceptions.
Corpus Online
C20 and N, which do not include any frequency information, are complemented by corpus data. Regularly updated corpora - ideally, updated on a daily basis - are most suitable for this paper's purpose. Such a corpus is newly available within the Czech National Corpus infrastructure; it is called Online 7 (hereinafter: O), and it allows one to observe the dynamics of the Czech lexicon, including neologisms. Its strength lies in two major points. Firstly, none of the numerous synchronic corpora of Czech provides such up-to-date language. Secondly, it is not limited to the easily searchable web, as are others (the TenTen or Aranea corpus families), but may reach beyond the public domain. As of October 22nd, 2020, it had amassed more than 6.4 billion tokens and was structured into several subcorpora representing individual media types. The acquisition of texts is targeted: O focuses on dynamic web content - be it professional (such as most media portals) or user-generated (such as social networks, discussions, and forums). However, some text types representing rather formal language (especially commercial presentations of companies) are not downloaded at all. The corpus includes internet discussions (under online articles) and hobby discussion forums, as well as the content of social networks (Facebook, Twitter). The opposite of this private communication is news websites (both traditional newspapers and magazines and leisure journalism). With respect to neology, the latter media seem to reflect a higher degree of social acceptability and entrenchment of new words and, conversely, a lower degree of personal involvement and expressiveness, whereas social networks, discussions, and forums mirror naturally spoken Czech much better.
Case studies
Our goal in this section is to describe the specifics of two thematically defined groups of neologisms. We deliberately chose examples that differ in the time of their coining and thus also in the degree of their dynamics (see Figures 1 and 2) and entrenchment in contemporary Czech. While the first group refers to the legacy of an already deceased statesman and has been gaining ground in Czech for several years, the second group is brand new and is thriving even as we write this paper. The groups also differ in the number of their members: it is evident that the Covid-19-related words cover a much larger spectrum of reality and therefore provide richer material.
Case study #1: Havlophobes vs. Havlophiles
The first case study notes the traces of former Czech president Václav Havel 8 in the current Czech lexicon. These traces are robust: even though Havel has been dead for a decade, his name has given rise to a large number of neologisms, most of which have negative connotations - opponents of Havel's philosophy and ethos use them to depreciate Havel's sympathizers. Contempt for Havel is often signified by a lower-case 'h' in his name.
Some of these words - a total of 13 entries - were already recorded in the dictionary of Czech neologisms (Martincová et al. 2004: 148). Furthermore, other Havel-related words can be found in both C20 and N. As of October 22nd, 2020, C20 had a total of 14 entries containing the string havel/havl, 9 plus an additional 7 entries directly linked to Havel, and there were 65 relevant neologisms in N. However, only 6 entries (out of 80, that is, 7.5%) were shared by these two lists. Together with two extra words from Martincová et al. (2004) found in neither C20 nor N, we obtained exactly 82 entries for our initial dataset, which can be compared with frequency data in O.
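The two bookkeeping operations used throughout this section - intersecting entry lists and converting raw corpus frequencies into ipm (instances per million tokens) - can be sketched as follows; the entry sets and the frequency count are hypothetical (and written without diacritics), and only the 6.4-billion-token corpus size is taken from the description of O above.

```python
# Sketch: entry-list overlap and ipm normalization (hypothetical counts).
c20 = {"havloid", "havlista", "havloidni", "pravdolaskar"}
neomat = {"havloid", "havlista", "posthavloidni"}

shared = c20 & neomat
print(f"shared entries: {len(shared)} of {len(c20 | neomat)}")

CORPUS_TOKENS = 6_400_000_000   # size of O as reported above

def ipm(freq: int) -> float:
    """Instances per million tokens."""
    return freq / CORPUS_TOKENS * 1_000_000

print(f"havloid: {ipm(41_000):.2f} ipm")   # ~6.41 ipm with this assumed count
```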
A significant majority of the neologisms (77%) are documented in O, but often only in negligible frequencies: 52 words (63%) have an ipm of less than 0.1, while only three words have an ipm higher than 1.0: havloid '[derogatorily] a supporter of the first president of the Czech Republic V. Havel' (6.41 ipm); havloidní 'of, relating to, or associated with havloid*' 10 (3.65); havlista 'a fanatic admirer of V. Havel' (2.84). These often serve as a base for further derivation; for example, havloid is the base for the adjective havloidní as well as the nouns havloidismus and havloidsatanista. However, besides these three derivatives from our dataset, a survey of O offers a far more diverse inventory of neologisms and nonce words, the most recurrent of which are the nouns havloidka and havloidismus, the adjective havloidský, the adverb havloidně, and the compounds chazarohavloidní, havloidiot, and posthavloidní. The ipm values of all these derivatives range from 0.1 to 0.01. Other words (no fewer than 600 different word forms) 11 do not even exceed 0.01. Similarly, a plethora of neologisms is derived from pravdoláskař ('a person who professes the legacy of [. . .] V. Havel (based on his famous statement: "Truth and love must prevail over lies and hatred."), with a liberal worldview, sometimes acting as a pathetic defender of democracy' < pravda 'truth,' láska 'love'), including its synonyms pravdoláskista and pravdoláskovec, the female form pravdoláskařka, the noun derivatives pravdoláska, pravdoláskařství, pravdoláskismus, and pravdoláskovství, the adjective derivatives pravdoláskařský, pravdoláskovní, and pravdoláskový, and the adverbial derivative pravdoláskařsky. Except for two words, pravdoláska (0.58 ipm) and pravdoláskový (0.19), none of them has an ipm higher than 0.06. It is obvious that, from a lexicographer's perspective, none of these words would normally get into the entry list of any dictionary except those specializing in neologisms.
The dynamics of the use of this lexical set is indicated in Figure 1. The actors in the Havel discourse can be divided into two rival camps, according to their attitude towards the ex-president, and are named variously, largely derogatorily. Havel's critics seem to be 'louder,' concentrating chiefly on social networks and antisystemic media (while mainstream media refer to Havel seldom and in a neutral way) and trolling online discussions. Remarkably, there are far fewer such words for the group of Havel's opponents: in our dataset, we find only the word lžinenávistník ('opposite of a pravdoláskař; a hater of ex-president Václav Havel') and its rare synonym lžinenávista, and the somewhat neutral adjective compound antihavlovský 'anti-Havel*,' or its Czech calque variant protihavlovský. Thus, Havel's supporters seem to be more benevolent towards their ideological adversaries, making do with already coined words when referring to them. At the same time, they often identify with pejorative sobriquets (in the same way the decadents once adopted an originally mocking denomination), thereby, as it were, blunting their edge. Paradoxically, a given word may thus have ambiguous semantic prosody, depending on whether its user is an opponent or a supporter of Havel.
Case study #2: As many face masks you possess, as many times you are a human being

A sad truth of the times: whoever does not write about Covid-19 these days need not bother writing at all. The unprecedented pandemic has probably been reflected in every language, including Czech, especially in the area of the lexicon. New words, multi-word units, and even idioms and proverbs arose (and then disappeared, often unrecorded) spontaneously and rapidly and flooded the mass media and the internet. With a bit of exaggeration, we might say that it would be possible to compile a special dictionary of these words alone. Apparently, they are often motivated psychotherapeutically: to transform something unknown and hostile hiding behind official medical terms into a common language understandable to laypeople, with the desired effect of making life in forced quarantine more bearable. Numerous words trivializing and mocking the whole situation have also occurred, reflecting the Czechs' grim humour. 12 Most neologisms have an expressive nature (čus virus 'greeting at the time of the coronavirus epidemic;' covnivál 'a person who is too keen on twaddle and hoaxes regarding coronavirus' < hovnivál 'dung beetle') and clearly manifest a speaker's attitude (koronapičus '1. unbalanced person, influenced by the media craze around the coronavirus, with unlimited belief in it and eager to make unsubstantiated and unfounded decisions; 2. author of mammoth-scale measures against coronavirus; 3. coronavirus' < pičus 'fucker').
The rise of neologisms was also recorded in C20. The very first Covid-19-related word is from January 26th, 2020: skorovirus (skoro 'almost') 'unconfirmed case of coronavirus from China.' 13 Since then, the number of neologisms has grown daily; as of November 22nd, it numbered no fewer than 711 entries. N is only slightly less rich than C20: it stored 707 Covid-related entries. 14 The common intersection of the two datasets, however, is surprisingly small: 129 out of 1292 (10%). 15 This ratio would decrease slightly (along with the number of lemmas) if we did not count purely spelling variants of the same lemmas; 16 yet there would still be a significant incompatibility between the two lists of entries: only every tenth word is shared. This somewhat startling result (along with that of the previous case study) demonstrates at least two things: (1) different approaches to the acquisition of neologisms (crowdsourcing and collaborative lexicography vs. controlled excerption performed by professionals) yield different results, and (2) this unstable and fluctuating part of the lexicon is difficult to capture. This is exactly why the complementarity of these different approaches and data sources is desirable. Only by combining these sources and contrasting them with frequency data from a monitor corpus will we get a more transparent overview of a) the behaviour of a given lexical set in the current language and b) candidates for inclusion in the official dictionary of Czech, which, coincidentally, is emerging at the moment (Akademický... 2012nn.).
The motivation for designating new Covid-19-related concepts in Czech is multifarious, and the diversity of word-forming patterns, as well as language creativity, can be well examined. First, we set apart words that already exist in Czech and have gained a new meaning during the pandemic; there are only 21 of them (1.6%). A few examples are based on either phonological or morphological proximity of the old and new meanings (nádržka 'small reservoir' > 'face mask' < na 'on' + držka 'gob;' koroner 'coroner' > 'a person infected with coronavirus;' korýš 'crustacean' > 'coronavirus') or conceptual closeness (náhubek 'muzzle' > 'face mask;' nechráněný styk 'unprotected contact' > 'violation of coronavirus quarantine by meeting without a face mask').
As regards truly new lexemes, the simplest way is to create a compound with the first component korona- or its truncated version koro- (altogether approximately 42% of new entries). There is a difference again between N, preferring longer forms, and C20, with shorter ones, especially if the second part of a compound starts with n- (koronanákaza vs. koronákaza 'the coronavirus epidemic'); often both variants referring to the same concept compete with each other: koronadovolená 'forced home office due to the spread of a new coronavirus' vs. korodovolená vs. korolená 'forced leave due to the coronavirus epidemic.' Obviously, these compounds are too long for everyday communication, and users tend to shorten them or replace them with blends. This group of words is highly productive, as its combinatorial limits are not entirely obvious: the initial segment koro(na)- can be combined with both concrete and abstract nouns and with words of domestic and foreign origin, and it can form all classes of content words. 17 In fact, this type of formation is so straightforward and widespread that it can actually give rise to a new prefixoid, alongside euro-, bio-, ex- and others.
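The competition between the full korona- compounds and their shorter koro- variants can be illustrated with a small string operation (our own toy example, not a morphological analyzer):

```python
def compound_variants(base):
    """Full korona- compound plus the truncated koro- variant."""
    return "korona" + base, "koro" + base

for base in ("nákaza", "dovolená"):
    full, short = compound_variants(base)
    print(f"{full} ~ {short}")
# koronanákaza ~ koronákaza      (with n-initial bases the two 'na' syllables merge)
# koronadovolená ~ korodovolená
```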
The truncated version of the given segment can also be inserted inside existing words, as a few examples suggest: ekoronomika 'economy severely affected by the coronavirus pandemic' (< ekonomika 'economy'); hypokorondr 'a person who constantly thinks he has the coronavirus or will soon catch it' (< hypochondr 'valetudinarian'); velikoronoce 'Easter during the coronavirus epidemic, when national quarantine was declared.' However, this type partly overlaps with another group (of mostly non-compounds) where there is only a minor modification of an existing word: koronténa 'quarantine for people suspected of having caught the coronavirus' (< karanténa 'quarantine'); mateřírouška 'face protection during the coronavirus pandemic that was sewn by one's mother' (< mateřídouška 'Thymus,' mateří 'maternal'); maskurbace 'touching a respirator or a face mask too often' (< masturbace 'masturbation'); syndrom vymoření 'state in which one has fully identified with the harsh Covid-19 regulations and continues to live by them [. . .].' Often the comic effect of combining a foreign and a domestic element is desired: korokdák '1. hoax, nonsensical statement about coronavirus; 2. ill-considered proposal for measures to mitigate the effects of the coronavirus epidemic' (< kdákat 'cackle'). Users are even aware of the metalinguistic nature of some neologisms: covid dokonavý 'coveted condition in which the coronavirus disease has definitely disappeared; opposite of covid nedokonavý' (< vid dokonavý 'perfective aspect'); covid nedokonavý 'condition in which the coronavirus disease Covid-19 is still present on planet Earth; opposite of covid dokonavý' (< vid nedokonavý 'imperfective aspect').
There are only a few loanwords without any modification, such as covid/korona free, social distance, early openers, virus stories, or Coronagate. This suggests that such elements feel unnatural in highly inflected Czech, and speakers try to adapt them (it might even be a welcome challenge) to the Czech phonetic, morphological, and grammatical system. Lockdown is a perfect example: it fits into the target language smoothly, being inflected easily as a hard-stem masculine, and it even serves as a root for subsequent derivation: the adjective lockdownový; the aspectual verbs lockdownovat, lockdownout, zalockdownovat; the diminutives lockdownek, lockdowníček; or numerous compounds such as pololockdown 'semi-,' pseudolockdown, skorolockdown 'almost-,' samolockdown 'auto-,' protilockdownový 'anti-,' fulllockdownový and many others, albeit with negligible frequencies. The smoothness of the adaptation process indicates why the potential Czech equivalents recorded in C20 (zdravora and zarach) are not found in O at all, and the phonetic transcription lokdaun only rarely (0.23 ipm): they are felt to be superfluous.
Other examples of secondary word-formation from previous neologisms include covčan 'a person who is easily manipulated by bigwigs abusing panic during a coronavirus epidemic to achieve their political goals' (< covid + ovčan 'citizen who acts like a sheep,' coined at C20 in 2009 as a blend of the Czech words ovce 'sheep' and občan 'citizen') or digitální domád 'a person working from a home office during forced quarantine,' modifying the original digitální nomád (doma 'at home') 'a person who travels the world while working remotely for his clients, all he or she needs being a computer and an internet connection,' from 2015.
New multi-word units appear, too, of course. In this respect, the approach to lemmatization differs between the two sources: while N is based on one-word lemmatization and collocations are available in a sublemma only via advanced settings, the C20 list of entries commonly includes multi-word lemmas (in our dataset, a total of 68 cases out of 711, which is 9.6%). Newly formed combinations of older words (škola v pyžamu 'distance school teaching during the coronavirus epidemic,' lit. 'school in pyjamas;' žíznivé okno 'temporary pub set up after the closure of pubs due to the spread of coronavirus,' lit. 'thirsty window') predominate; only some use neologisms (na coviděnou 'farewell at the time of the Covid-19 disease;' koronavirová opona 'closed state border due to a coronavirus pandemic' < železná opona 'Iron Curtain'). Older phrases and idioms are exploited in an original and playful way: dát si dvacet 'have an hour free and sew twenty face masks during that time' (< 'have forty winks'); šlápnout do pedálů 'sit down at a sewing machine and sew face masks on it all day' (< 'pump the pedals'); na adama 'without a mask, that is, with an exposed face [. . .]' 18 (< 'skinny dipping'). Even topical modifications of older proverbs are emerging: Rouška kvapná, málo platná 'A hastily made face mask makes waste' (< Práce kvapná . . . 'Haste makes waste') or Kolik roušek máš, tolikrát jsi člověkem 'The number of face masks you possess, the number of times you are human' (< Kolik jazyků/řečí umíš . . . 'The number of languages you know . . .'), to name just a few (cf. also Semelík 2020: 5).
As for the keywords that serve as the derivational basis for neologisms, they are most often official medical terms, that is, koronavirus, Covid-19, or just the general designation virus (or vir in spoken Czech). Together, these three types cover up to 61.8% of the neologisms in our dataset. Furthermore, the face mask is involved very frequently, which is not surprising, given the enthusiasm with which the Czech people began to make protective equipment at home after the government failed to provide it in spring 2020. Besides the official terms for a face mask (rouška or ústenka), literally dozens of new words emerged, not only for the mask itself but also for its various types (ropuška 'used, unwashed face mask' < ropucha 'toad' + rouška; šourka 'coronavirus face mask sewn from old boxer shorts or briefs' < šourek 'scrotum;' trikiny 'swimsuit comprising a bikini and a face mask from the same material; see also koronakiny, trojdílné plavky [three-piece swimsuit]'), for people (not) wearing them (rouškař 'a person wearing a face mask;' bezrouškař 'a person violating the obligation to wear a face mask during the coronavirus epidemic;' rouškarián 'a person who has a strong belief in the benefits of face masks against the spread of coronavirus and therefore wears one constantly; the opposite is bezrouškarián') or for other related concepts (rouškie 'selfie in a face mask;' rouškiss 'kiss through a face mask'). The Czech Republic has become a 'face mask power,' which is well reflected in several neologisms: roušpublika 'obligatorily face-masked country' (< rouška + republika 'republic'); rouškistán; prorouškovanost 'degree of voluntary wearing of face masks even in situations where their wearing is not obligatory.' To a much lesser extent, this also concerns other protective equipment: respík/respoš 'respirator;' respirožec 'person wearing a respirator on his forehead' (< respirátor + jednorožec 'unicorn'); sněhulák 'special suit for paramedics testing people for coronavirus' (lit. 'snowman').
Other keywords include karanténa 'quarantine' (tyranténa 'restriction of personal freedom during the coronavirus crisis;' karande 'date which respects coronavirus quarantine and thus a safe distance, ideally in a virtual and contactless mode' [< rande 'date']) and pandemie 'pandemic' (infodemie 'disseminating an excessive amount of information about a problem, often unverified and misleading;' pandemáček 'child conceived at the time of the coronavirus pandemic'); sometimes it is a combination of two keywords (koropanda 'Covid-19 pandemic' [< panda 'panda bear']; pandavirus 'Chinese coronavirus (Covid-19 disease)'). Numerous neologisms originated from the name of the Czech epidemiologist (and short-term Minister of Health) Roman Prymula, who became the symbol of the fight against the pandemic (prymulex 'set of government measures against coronavirus;' prymulka 'home-made face mask of poor quality (e.g., from pyjamas), worn out of duty during the coronavirus epidemic;' deprymulovaný 'depressed by the measures devised by the chief epidemiologist Roman Prymula'). Other people were reflected only sporadically in our dataset, like Prime Minister Andrej Babiš: 19 na babiše (lit. 'in Babiš's manner') '[to wear] a mask with strings untied.' Surprisingly, a word inspired by the name of the former Minister of Health Adam Vojtěch appeared only scarcely (5 entries in C20).
The last issue we want to touch on in our pre-corpus analysis is the number of senses. Most neologisms are monosemous: we found only 45 polysemous words in the C20 sub-dataset (6.3%). Most of them have two senses, like koronovaný '1. infected by the coronavirus, 2. infected by the brain-washed media hysteria over the Chinese coronavirus [. . .]' (< korunovaný 'crowned'); in isolated cases, three or more senses can be found. 20 This points to the semantic ambivalence of neologisms, as their meanings are far from definite but are still being formed and negotiated. Yet, an overwhelming majority of neologisms die out before gaining another sense.
It is a sad paradox that the term 'coronavirus' is by no means new (it goes back to 1968), but until 2020 its use had been limited to the professional domain of language. It is therefore registered, albeit scarcely, by the older corpora of contemporary Czech: 30 hits, 0.25 ipm in SYN2015 (Křen et al. 2015). However, unlike in the previous case study, we would look in vain in these corpora for the related neologisms from our dataset. Regularly updated web corpora 21 are the best choice. Figure 2 shows a dramatic increase in the frequency of selected Czech words (koronavirus or covid and their derivatives) in the initial stage of the coronavirus crisis, yet a much more stable course later on, despite the second wave in autumn 2020. Online news websites, whether mainstream or tabloid, refer to the pandemic significantly more often than social media, which is the opposite of the previous case study.
Not only are the raw frequencies dynamic and fluctuating; the collocates of some neologisms may also change over time. If, for example, we compare the collocation profiles of these expressions from the first and second wave (March 25 vs. October 25, 2020), the results may be quite surprising. Out of the 40 top collocates for either period (attribute: lemma, window span: -3 to 3, sorted according to logDice), only 5 (7%) are shared: boj 'fight,' epidemie 'epidemic,' onemocnění 'disease,' pacient 'patient,' and šíření 'spread.' This overlap, however minimal, represents the invariant core which clearly refers to the semantics of the word ('a disease that is spreading among patients until it becomes an epidemic and must be fought against'), while the variables reflect temporal and/or spatial specifics (for example, banální 'banal,' celoplošný 'nationwide,' Charles, mikroskopický 'microscopical,' odhalený 'revealed,' popírač 'denier,' princ 'prince,' rychlotest 'rapid test;' numbers referring to statistics of infected/dead patients are also frequent). Table 1 gives a brief outline of Covid-19-related lexemes (not necessarily neologisms) in terms of their saliency within selected media types. Data from the SYN2015 corpus are added for comparison with the pre-pandemic situation.
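The collocate comparison can be expressed as a simple set operation over two ranked lists; the sketch below is our own schematic re-implementation with invented, truncated lists (the real comparison used the top 40 collocates per period, ranked by logDice):

```python
def shared_collocates(period_a, period_b, top_n=40):
    """Common collocates of two ranked lists and their share of the union."""
    a, b = set(period_a[:top_n]), set(period_b[:top_n])
    common = a & b
    return common, len(common) / len(a | b)

# Invented, shortened lists standing in for the logDice-ranked top collocates.
march = ["boj", "epidemie", "onemocnění", "pacient", "šíření", "mikroskopický"]
october = ["boj", "epidemie", "onemocnění", "pacient", "šíření", "rychlotest"]

common, ratio = shared_collocates(march, october)
print(sorted(common), f"{ratio:.0%}")  # the invariant core of the word's semantics
```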
The steep rise in the frequency of most words is not at all surprising, and many of them are undoubtedly hot candidates for the Word of the Year 2020. It is much more interesting to note the unequal distribution across media types: newspapers, regardless of their degree of reliability, provide information on the topic virtually without interruption (their ipm values in the table exceed the average level for almost every word), and discussions under articles are, from a lexical perspective, just their extension. A rich Covid-19 discourse also takes place on anti-establishment websites, which are an alternative to the official media for a specific part of society. By contrast, the topic is discussed to a much lesser extent in the private sphere. Various hobby forums are thematically predefined, and the current discourse infiltrates them far less and rather furtively. The low ipm values among users of social networks may be surprising, although they can probably be attributed to the preference for more colloquial variants of official words (for koronavirus: koroňák, koronáč, korona, and others, including non-diacritical variants in the online environment).
Conclusion
If we are to summarize our findings from both case studies briefly, the following points should be underlined. The Czech language has coped with the flood of neologisms during the Covid-19 crisis, proving its flexibility and openness to foreign elements (which some speakers consider undesirable). The Czechs have weathered a difficult situation with creativity and humour, albeit sometimes at the cost of being politically incorrect. 23 The Havel discourse, on the other hand, is already firmly entrenched in the Czech environment and has been ongoing at least since Havel's death in 2011, with regularly recurring peaks around the anniversaries connected with the ex-president's life. Havel's detractors, concentrated mostly on social networks, internet discussions, and anti-establishment media, seem to have the upper hand in it, as most neologisms have markedly defamatory traits, although Havel's sympathizers often identify with these unfavourable appellations.
The two lexical sets also differ in their extent (1292 vs. 82 in our datasets). Thanks to the prefixoid korona- (and, less so, covid-), the first group is de facto an open class, continually growing, while the growth of the second one is limited to ad hoc nonce words with negligible ipm values. There are similar limitations regarding the typology of these neologisms. In the pandemic discourse, almost every word-forming strategy available in Czech is used: derivation, compounding (including blending), shortening, borrowing (including calquing), creating multi-word units, or figurative use. Havel-related words, on the other hand, are typically simple derivatives or, to a lesser extent, compounds.
The original datasets obtained from the various sources were significantly different, sharing only every thirteenth or tenth word, respectively. Therefore, if we want to examine a given group of neologisms with a sufficiently empirical approach, only a combined study of multiple sources provides adequately reliable information. In other words, amateur contributors help to minimize gaps in a given lexical subsystem, drawing attention to neologisms that professional lexicographers might have omitted.
Outlook and discussion
Among other things, the possible sequel to CH mentioned earlier (2.1) leads us to consider merging the methods of collaborative and professional lexicography. As its creators, we are looking for a fresh way to present the sequel, especially if the book form is preserved. There are certainly many possible solutions, of which contrasting user-generated content with objective data represented by language corpora (O in our case) appeals to us the most. How specifically to do this depends on the desired degree of data-drivenness. The list of entries, conceived purely subjectively in CH, should now be compared against corpus data, and a certain frequency threshold should be set for entries to be included in the dictionary. Of course, the microstructure of the entry should also change (see Figure 3): it will be expanded with frequency data, statistics by media type, the first occurrence in the corpus along with the date of entry in C20, as well as examples provided by contributors, newly contrasted with real examples from the corpus. Alongside the lemma, thumbs up/down mirror the other users' rating and thus the popularity of the given entry. The icon of a magnifying glass indicates that the word is found in O; a crossed-out magnifying glass marks an entry recorded only in C20. These updated entries will need more space, which will logically reduce their number if a book is to be the outcome.
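One way to picture the enriched microstructure is as a record combining user-generated content with corpus statistics. The data model below is purely our sketch (all field names are invented, not the project's actual schema):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class DictionaryEntry:
    lemma: str
    definition: str                               # contributor's wording, possibly edited
    thumbs_up: int = 0                            # other users' rating
    thumbs_down: int = 0
    ipm: Optional[float] = None                   # frequency in O, if attested there
    ipm_by_media: Dict[str, float] = field(default_factory=dict)  # news, social networks, ...
    first_corpus_hit: Optional[str] = None        # date of first occurrence in O
    c20_added: Optional[str] = None               # date of entry in C20
    user_examples: List[str] = field(default_factory=list)
    corpus_examples: List[str] = field(default_factory=list)

    @property
    def attested_in_corpus(self) -> bool:
        """Magnifying-glass icon: True if the word is found in O at all."""
        return self.ipm is not None
```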
Undoubtedly, it is difficult to predict which of the deluge of new words will become permanently entrenched in the language (as a matter of fact, only a very few will 24 ) and which will not. The lexicon, as the most dynamic level of language (especially in turbulent times), is a mycelium for linguistic change. Nevertheless, it is good that new words are being recorded to an unprecedented extent, not only by linguists (N) but also by laymen (C20). Even if they do not make it into the official dictionaries, they contribute to a more complex and apt description of contemporary language, capturing not only the literary form of Czech but also its non-standard, yet commonly spoken, variant. This is especially important if we consider the actual post-diglossic situation in Czech (Bermel 2014). What is especially gratifying is that this initiative arises 'from below,' from amateurs who voluntarily record neologisms, for it implies that they care about their mother tongue. After all, there is nothing new under the sun: what would the Oxford English Dictionary be without thousands of anonymous contributors?
We are aware of the disadvantages and limits of amateur lexicographers (cf. Gao 2012: 427-430, Hanks 2012: 77-82 or Lew 2014): due to a lack of scientific training, they write their definitions instinctively and subjectively, they come up with unnatural example sentences, and they even create their own words. Moreover, they are selective about which words they contribute to the dictionary: originality, wittiness, and intense emotion are pursued, whereas 'boring' words (read: those overused by the mass media) are often neglected. However, these objections can be mitigated by the intervention of a professional lexicographer acting as the dictionary's editor, who can alter problematic entries in terms of wording, examples, et cetera, or supply missing entries that contributors find unattractive.
Conversely, amateurs' straightforwardness can also be counted as an advantage. By creating their own entries, they show us their idea of a user-friendly entry and of how a dictionary should look from their perspective (cf. Meyer and Gurevych 2012: 291). Their engagement can also reveal more general notions of language itself among the public, as the metalinguistic role of language gains prominence. This, along with the unquestionable merit of zero or minimal costs, helps to outweigh the drawbacks. The synergy between lay approaches and theory-based lexicography, supported by corpus frequency data, contributes to a richer description of (not only) neologisms.
Similarly, inspiration in the opposite direction may be desirable too: the quality of collaborative web dictionaries will certainly improve with the adoption of specific lexicographic standards that professionals can provide, as the process of upgrading the C20 raw material into CH suggests. The stricter rules adopted in CH should subsequently affect the editing rules for C20, and with more reliable editors (there is just one at present), the whole project may become semi-professionalized. Theoretically, it is also possible to supplement C20 with links to appropriate corpora and/or N, where users would get more information about the use of a word in its natural context. Although the content would no longer be exclusively user-generated, it would remain user-oriented.
Thus, the possibilities are open and include theoretical questions which may have received considerable attention in Czech linguistics, but still with little use of corpus methods and data. 25 Related to our article, one of the many potential research questions is this: which of the words posted at C20 are actually used by Czech speakers, and, conversely, which words are actually used but not yet reflected by crowdsourcing at all, and why would that be so? It will be no less interesting to see how firmly the words from the two groups described above become entrenched in Czech and to what extent reference materials will reflect them in the future. However, this requires a sufficient time interval as well as regularly updated corpus data.
Notes
1. Attitudes to neologisms, especially to loanwords (and especially from English), can differ within the public, from favorable to neutral to negative; cf. the recently published study regarding Slovak (Panocová 2020). A specific context common to both Czech and Slovak should be mentioned here, viz. (from our perspective unfortunate) the long-term influence of the prescriptive orientation of linguistics, the residues of which persist up to now. This can also be a strong source of motivation for laymen: to be allowed to contribute their piece of knowledge without a language expert's testimonial.
2. https://cestina20.cz//
3. Increase of the list of entries in the last 5 years: 2015: 5,801 words; 2016: 7,078 words; 2017: 9,605 words; 2018: 12,200 words; 2019: 16,250 words. In the first half of this year (until June 7th), no fewer than 2,500 words were added, most often in connection with the COVID-19 pandemic (nearly 600 words).
4. We know practically nothing about the C20 contributors, except for their name or nickname and IP address, as the contribution process is entirely anonymous. In this regard, we find the following studies inspiring: Müller-Spitzer, Wolfer and Koplenig 2015; Wolfer and Müller-Spitzer 2016; Sköldberg and Wenner 2020.
5. Cf. a recently published popular book about lexicography (Lišková and Semelík 2019).
6. http://neologismy.cz/
7. In fact, there are two Online corpora: 1) ONLINE_NOW (Cvrček and Procházka 2020a) contains data from the current month and the six previous months; it is updated daily; and 2) ONLINE_ARCHIVE (Cvrček and Procházka 2020b) contains data from February 2017 to the month in which ONLINE_NOW begins; it is updated at the beginning of each month. The corpora do not overlap in the covered periods, so for a search across the entire time range, it is enough to merge the query results from both corpora (as we did in our case studies); further manual adjustments to remove overlap are not necessary.
8. Born 1936, died 2011; president of Czechoslovakia 1989-1992; president of the Czech Republic 1993-2003. A general view of the frequency of the collocation | 9,395.4 | 2021-02-14T00:00:00.000 | [
"Linguistics"
] |
New Insight into Post-seismic Landslide Evolution Processes in the Tropics
Earthquakes not only trigger landslides in the co-seismic phase but also elevate post-seismic landslide susceptibility, either by causing a strength reduction in hillslope materials or by producing co-seismic landslide deposits, which are prone to further remobilization under the external forces generated by subsequent rainfall events. However, we still have limited observations of post-seismic landslide processes, and the examined cases are rarely representative of tropical conditions, where the precipitation regime is strong and persistent. Therefore, in this study, we introduce three new sets of multi-temporal landslide inventories associated with subsets of the areas affected by 1) the 2016 Reuleut (Indonesia, Mw = 6.5), 2) the 2018 Porgera (Papua New Guinea, Mw = 7.5) and 3) the 2012 Sulawesi (Indonesia, Mw = 6.3), 2017 Kasiguncu (Indonesia, Mw = 6.6) and 2018 Palu (Indonesia, Mw = 7.5) earthquakes. Overall, our findings show that the landslide susceptibility level associated with the occurrence of new landslides returns to pre-seismic conditions in less than a year in the study areas under consideration. We stress that these observations might not be representative of the entire areas affected by these earthquakes but only of the areal boundaries of our study areas.
INTRODUCTION
Based on the number of casualties, earthquakes and precipitation are the most common landslide triggers (Petley, 2012), and near-real-time global landslide susceptibility assessment methods are separately available for both earthquake-triggered (e.g., Nowicki Jessee et al., 2018; Tanyaş et al., 2019) and rainfall-triggered (Kirschbaum and Stanley, 2018) landslides. However, none of these statistically based methods is capable of accounting for the coupled effect of earthquakes and precipitation. Nevertheless, characterizing these interactions is critical to advance effective landslide susceptibility assessment because various studies show that the combined effect of earthquakes and rainfall can increase landslide susceptibility (e.g., Sassa et al., 2007; Saemundsson et al., 2018; Wistuba et al., 2018; Bontemps et al., 2020; Chen et al., 2020a). Specifically, earthquakes are recognized as an important predisposing factor increasing post-seismic landslide susceptibility, either by disturbing the strength and/or geometry of hillslope materials or by producing co-seismic landslide deposits, which are prone to instabilities mostly due to subsequent rainfall events (e.g., Lin et al., 2004; Parker et al., 2015; Tanyaş et al., 2021).
Either way, the seismic effect can cause a reduction in rainfall thresholds in post-seismic periods (e.g., Liu et al., 2008; Liu et al., 2021; Tanyaş et al., 2021).
To capture the preconditioning effect of seismic shaking for a rainfall-triggered landslide susceptibility assessment, we first need to understand the evolution of landslides in post-seismic periods.
In the geoscientific literature, post-seismic landslide evolution is examined on the basis of the temporal variation of several parameters, such as landslide rate (km²/year, in Barth et al., 2020), landslide density (m²/km², in Marc et al., 2019), climate-normalized landslide rate (Marc et al., 2015), number of landslides (Saba et al., 2010), total landslide area (Shafique, 2020) and cumulative landslide area/volume (Fan et al., 2018). The timespan of the post-seismic period required to restore a given area to pre-seismic landslide susceptibility levels is called the landslide recovery time (e.g., Marc et al., 2015; Kincey et al., 2021), and it is mostly identified using one of the parameters listed above.
Various factors can be interchangeably and/or simultaneously used to explain the mechanisms behind landslide recovery time. Positive correlations between landslide recovery time and factors such as the amount of co-seismic landslide deposits (e.g., Chen et al., 2020b; Tian et al., 2020; Yunus et al., 2020), the intensity of seismicity in terms of both mainshocks and aftershocks (Fan et al., 2018; Tian et al., 2020) or the revegetation rate (e.g., Chen et al., 2020; Xiong et al., 2020; Yunus et al., 2020) are emphasized in the literature. However, there is no agreement in the geoscientific community on the actual meaning of the term landslide recovery. On one hand, some geoscientists define the recovery as a mechanical healing process in which the strength of hillslope material is restored (e.g., Marc et al., 2015). On the other hand, others argue that healing of hillslope material strength is not possible through natural processes under low pressure and temperature conditions (e.g., Parker et al., 2015).
Nevertheless, the agreement reported above within the geoscientific community leaves room for an equal amount of disagreement on the duration of the recovery. In fact, even for the same earthquake, there are different observations regarding the time over which the elevated landslide susceptibility persists in post-seismic periods. For instance, Shafique (2020) examines a subset of the area affected by the 2005 Kashmir earthquake from 2004 to 2018 using multi-temporal landslide inventories and indicates that 13 years after the earthquake the level of landslide susceptibility is still higher than the level estimated in pre-seismic conditions. Conversely, Khan et al. (2013) monitored a sample of the hillslopes that failed during the Kashmir earthquake and suggested that the landscape returned to the pre-seismic susceptibility level within five years after the earthquake.
Likewise, different timespans of elevated landslide susceptibility have also been suggested for other large earthquakes, such as the Chi-Chi (e.g., Shou et al., 2011; Marc et al., 2015), Wenchuan (e.g., Fan et al., 2018; Chen et al., 2020b) and Gorkha (e.g., Marc et al., 2019; Kincey et al., 2021) earthquakes. Notably, the inconsistency between different observations could be related to the boundaries of the examined areas (e.g., Shafique, 2020; Yunus et al., 2020) because the ground shaking level varies spatially, and hence its effect varies as well. In other words, the damage produced by ground motion is not homogeneous throughout the area affected by an earthquake. Kincey et al. (2021) elaborate on this issue and refer to both methodological and conceptual problems. They note that the method used to map landslides and, in particular, the data used for the mapping may play a role. They also indicate that post-seismic landslide evolution could be assessed by monitoring new landslides alone or both new landslides and reactivated co-seismic landslides. In turn, depending on the targeted post-seismic landsliding processes, different conclusions regarding the post-seismic evolution of landslides could arise.
Setting aside these uncertainties, the actual landslide recovery time could also be different in each earthquake-affected area because of the diversity of environmental conditions (e.g., Kincey et al., 2021). For instance, landslide recovery time could be longer in areas affected by stronger earthquakes (e.g., Fan et al., 2018) and/or stronger and more numerous aftershocks (Tian et al., 2020). Also, the amount of co-seismic landslide deposits and the precipitation patterns could influence the landslide recovery time (e.g., Tian et al., 2020). This shows that different seismic and climatic conditions could shape the general characteristics of post-seismic landslide evolution processes. In this context, new cases reflecting different environmental conditions are essential to better understand the post-seismic processes.
Specifically, new cases from high-relief mountainous environments where the precipitation rate is high and persistent could provide valuable information regarding landslide recovery time because such conditions could trigger more landslides and allow us to create high-resolution, multi-temporal landslide inventories. However, the literature summarized above shows that post-seismic landslide evolution is rarely examined for fully humid, tropical conditions (Figure 1). The only case belonging to this climate zone is the 1993 Finisterre earthquake (Marc et al., 2015). Therefore, in this paper, we aim to contribute to the current literature by introducing three new sets of multi-temporal landslide inventories (two sites from Indonesia and one from Papua New Guinea) where the post-seismic periods are governed by strong and persistent precipitation regimes.
The area affected by the Reuleut earthquake is the first site we examined (Figure 2). The second area is affected by the Porgera earthquake (Figure 3). The third site is affected by three earthquakes: the Sulawesi, Kasiguncu and Palu earthquakes (Figure 4). We should note that an aggregated version of the inventories mapped for the first and the third sites was also examined by Tanyaş et al. (2021) to investigate the legacy of earthquakes as a predisposing factor in susceptibility assessments run for rainfall-induced landslides in post-seismic periods. Specifically, the authors ran statistically based multivariate analyses to monitor the contribution of Peak Ground Acceleration (PGA) through time from co-seismic to post-seismic periods. However, Tanyaş et al. (2021) did not elaborate on landslide recovery time, which is the focus of this contribution.
To map the multi-temporal inventories, we used PlanetScope (3-5 m) and RapidEye (5 m) images acquired from Planet Labs (Planet Team, 2018) and high-resolution Google Earth scenes. The details of the satellite images we used are presented in Supplementary Tables S1, S2 and S3 (see Supplementary Material). We systematically examined the satellite images through visual observation, which is the ideal mapping technique reported in the literature (e.g., Xu 2015; Tanyaş et al., 2021). We did not differentiate the source and depositional areas of landslides and delineated them as parts of the same polygon.
For each earthquake-affected area, we initially examined all available remotely sensed scenes and chose the largest available cloud-free regions. In turn, all the multi-temporal images we used for mapping convey the real landslide distribution over time during pre- and post-seismic periods. Notably, we could not follow a fixed temporal resolution in creating the inventories; we mapped as many inventories as the imagery availability allowed (Table 1). In each inventory, we eliminated landslides that had previously occurred and included only new failures.
The 2016 Reuleut earthquake occurred along a strike-slip fault, and it triggered only 60 co-seismic landslides over a scanned area of 1356 km² (Figure 2). We created one landslide inventory associated with pre-seismic conditions, a co-seismic landslide inventory and three post-seismic ones (Table 1). Intermediate and basic volcanic and mixed sedimentary rocks are the dominant lithologic units (Sayre et al., 2014) in which landslides were triggered. Based on our interpretation, the co-seismic failures are primarily characterized by shallow translational slides (60 landslides, 0.4 km² of landslide area). The percentage of post-seismic landslides that interact with previous failures is negligible (<1% of the post-seismic landslide population), and no remobilization was observed in the post-seismic period. In other words, most post-seismic failures are new landslides.
As for the 2018 Porgera earthquake, which occurred on a thrust fault, we examined a 491 km² window and mapped a co-seismic landslide inventory including 1,168 landslides with a total surface of 9.8 km² (Figure 3). Landslides were triggered in basic volcanic and carbonate sedimentary rocks (Sayre et al., 2014). Rock/debris avalanches and translational landslides are observed in the co-seismic landslide inventory. We also mapped two pre-seismic and three post-seismic landslide inventories (Table 1). Despite the relatively large deposits of co-seismic landslides, we did not observe any connection between post-seismic landslides and previously formed deposits or sliding surfaces. In other words, we mapped only new landslides.
The areas affected by the 2012 Sulawesi (strike-slip), 2017 Kasiguncu (normal fault) and 2018 Palu (strike-slip) earthquakes overlap (Figure 4). We mapped the landslides associated with the three earthquakes over an area of 1078 km². The co-seismic landslide inventories we created for the overlapping area contained 520 (1.2 km²), 386 (0.5 km²) and 725 landslides (2.3 km²), respectively. We also mapped five, seven and three post-seismic landslide inventories for the Sulawesi, Kasiguncu and Palu earthquakes, respectively (Table 1). In each case, we interpret the majority of landslides as shallow slides, which were triggered in metamorphic and acid plutonic rocks (Sayre et al., 2014). Also, in each case, post-seismic landslides appeared as new failures regardless of the locations of co-seismic landslides and their deposits. The percentage of post-seismic landslides that appear to have interacted with previous failures is less than 5%.
Once the multi-temporal inventories were compiled, we examined the temporal evolution of landsliding based on the changes in both the number of landslides and landslide rates. We calculated the landslide rates as the total landslide area divided by the length of the scanned time window (m²/year). We also analyzed the variation in the precipitation regime to evaluate the role of rainfall. We used the Integrated Multi-Satellite Retrievals (IMERG) Final Run product (Huffman et al., 2015), which is available through the Giovanni (v.4.32) (Acker and Leptoukh, 2007) online data system. Using this product, we first calculated the mean and standard deviation of daily accumulated precipitation from a 20-year time series (from January 1, 2000 to March 31, 2020) and compared them with the variation in landslide occurrences. Second, we created boxplots of daily accumulated precipitation for each time window for which we mapped a landslide inventory and again compared them with the variation in landslide occurrences.
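The two quantities described above reduce to simple arithmetic; the sketch below (our illustration, not the authors' processing chain; the input values are invented) shows the landslide-rate calculation and the precipitation summary statistics:

```python
from datetime import date
from statistics import mean, stdev

def landslide_rate(total_area_m2, window_start, window_end):
    """Total mapped landslide area divided by the window length, in m^2/year."""
    years = (window_end - window_start).days / 365.25
    return total_area_m2 / years

# e.g., 0.4 km^2 of landslides mapped over a roughly four-month window
rate = landslide_rate(0.4e6, date(2016, 12, 14), date(2017, 3, 25))
print(f"{rate:.3g} m^2/year")

# Daily accumulated precipitation (mm/day) for one mapping window (invented values)
daily_precip = [4.2, 0.0, 12.7, 33.1, 8.9, 0.3, 21.5]
print(f"mean = {mean(daily_precip):.1f} mm, sd = {stdev(daily_precip):.1f} mm")
```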
RESULTS
For the area affected by the Reuleut (December 6, 2016) earthquake, we compiled one landslide inventory associated with pre-earthquake conditions, a co-seismic landslide inventory and three post-seismic ones (Table 1). We observed the peak landslide rate in the first post-seismic inventory, which we created by comparing the images acquired on December 14, 2016 and March 25, 2017. After the first post-seismic inventory, a strong decline in landslide rates toward pre-seismic conditions arises (Table 1 and Figure 5).
We created the second post-seismic landslide inventory by comparing the images acquired on March 25, 2017 and February 12, 2018. Precipitation amounts show that during the period covered by the second post-seismic inventory, the study area was exposed to more intense rainfall events than in the pre-seismic period we examined (Figure 5). Also, the time windows we scanned to create the pre-seismic and second post-seismic landslide inventories have approximately the same length, which is one year. Yet the landslide rates and the numbers of landslides triggered by rainfall are at the same level in both phases. This shows that the landslide rates we calculated for the occurrence of new landslides had returned to pre-seismic levels by February 12, 2018 (Figure 5), and that the elevated landslide susceptibility was only valid until March 25, 2017. Also, we note that the highest daily accumulated precipitation for this four-month time window (i.e., between the Reuleut earthquake and March 25, 2017) is observed soon after the earthquake, on January 4, 2017. However, due to the lack of more frequent imagery, we could not create a landslide event inventory for that specific rainfall event.
It is worth noting that the Reuleut earthquake provided a rare observation in which the co-seismic landslide rate is smaller than its post-seismic counterpart (Tanyaş et al., 2021). The peak landslide rate is mostly associated with co-seismic landslide events in the literature (e.g., Saba et al., 2010; Fan et al., 2018). However, in this case, the earthquake did not trigger widespread co-seismic landslides, although it most likely disturbed hillslope materials and made them more susceptible. As a result, the subsequent rainfall event caused a higher landslide rate than the co-seismic phase (Tanyaş et al., 2021). Regarding the Porgera (February 25, 2018) earthquake, we created two landslide inventories for pre-earthquake conditions, a co-seismic one and three additional post-seismic inventories (Table 1). We compared two sets of images from February 4, 2018 and March 25, 2018 to map the co-seismic landslides. We observed the peak landslide rate in the co-seismic phase, and all post-seismic inventories then gave rates in the same range as the pre-seismic observations (Table 1 and Figure 6). This shows that the landslide rates we calculated for the occurrence of new landslides had returned to pre-seismic levels by March 25, 2018 (Figure 6). Within the 50-day gap between the two sets of images we used to create our co-seismic landslide inventory, we noticed two peaks in daily accumulated precipitation, on March 12th and 21st. Therefore, those rainfall events may have already triggered some of the post-seismic landslides, and our co-seismic inventory may also include post-seismic landslides. However, we do not have landslide inventories capturing those specific rainfall events.
In the third site, affected by three earthquakes (the 2012 Sulawesi, 2017 Kasiguncu and 2018 Palu earthquakes), we separately compiled co-seismic landslide inventories for each case. Furthermore, we mapped five inventories between the 2012 Sulawesi and 2017 Kasiguncu earthquakes. Similarly, we digitized seven inventories to monitor landslide rates between the 2017 Kasiguncu and 2018 Palu earthquakes. Ultimately, we compiled three additional inventories describing post-seismic conditions with reference to the last (Palu) earthquake (Table 1). Below, we present each earthquake and the associated pre-, co- and post-seismic landslide inventories separately.
The inventory featuring the co-seismic landslides triggered by the Sulawesi earthquake (August 18, 2012) lacked the support of pre-earthquake imagery. Moreover, we could not find cloud-free images covering the entire area until August 20, 2013. However, we acquired some scenes (e.g., August 17 and 21, 2012, September 4, 2012 and February 4, 2013) which allowed us to partly but consistently observe pre- and co-seismic conditions in a fraction of the study area. Therefore, the peak landslide rate we observed in the first post-seismic inventory (August 20, 2013) likely reflects the presence of some pre- and post-seismic landslides in addition to the co-seismic ones (Figure 7). Nevertheless, the six intra-seismic inventories mapped between August 20, 2013 and April 25, 2017 showed significantly lower landslide rates than the first post-seismic one. As a result, we can still assume that the August 20, 2013 inventory mostly encompasses co-seismic landslides.
For the Kasiguncu (May 29, 2017) earthquake, we observed another co-seismic landslide peak (Figure 7). We compiled this inventory using images acquired on June 7, 10 and 26, 2017. Therefore, we can confidently argue that co-seismic landslides cause this peak. We also mapped seven intra-seismic landslide inventories before the occurrence of the Palu earthquake. The first two intra-seismic inventories showed relatively higher landslide rates than the rest (Figure 7). These relatively high rates can be linked to the extreme precipitation discharged after the Kasiguncu earthquake (note the six rainfall peaks in Figure 7C), although these rates are still in the range of, or lower than, the ones before the Kasiguncu earthquake (Figure 7). Notably, the third post-Kasiguncu inventory (March 8, 2018) highlights a regular, pre-seismic landslide regime, which implies that the landslide rates we calculated for the occurrence of new landslides had returned to pre-seismic levels by March 8, 2018 (Figure 7). For the Palu (September 28, 2018) earthquake (Mw 7.5), we also compiled a co-seismic landslide inventory, using scenes acquired on October 2 and 5, 2018. In this case, the associated landslide rate is significantly higher, due to the stronger shaking with respect to the previous two earthquakes (2012 Sulawesi, Mw 6.3, and 2017 Kasiguncu, Mw 6.6), which took place in the same area (Figure 4). The three post-seismic inventories highlight a rapid decline in landslide rates, although these rates did not align with the low to very low rate trends shown in pre-Palu conditions (Figures 7A,B). Nevertheless, we do not have as adequate a series of observations as we have for the Kasiguncu case, and because of this, it is not clear whether these low landslide rates imply a return to pre-seismic levels.
DISCUSSION
As noted earlier in the text, in this study we focused on sites where post-seismic landslide processes are mostly governed by the occurrence of new landslides, in the tropics, where precipitation is high and persistent. We examined five earthquakes in total and mapped multi-temporal landslide inventories for each of them from pre- to post-seismic phases. Of the five earthquakes, the landslide time series we created for the Sulawesi and Palu earthquakes did not provide adequate information to cover the entire process of landslide evolution. In the Sulawesi case, we could not map a pre-seismic landslide inventory, whereas in the Palu case our inventories did not cover a period long enough to monitor the entire post-seismic landslide evolution. For the other three examined cases (2016 Reuleut, 2017 Kasiguncu and 2018 Porgera), our multi-temporal inventories showed that after the earthquake the elevated landslide susceptibility levels return to pre-seismic conditions in less than a year.
We stress that these observations may not be representative of the entire areas affected by these earthquakes but only of the areal boundaries of our study areas. This means that these observations may not be valid for the whole areas affected by these earthquakes. However, compared to similar works in the literature suggesting at least a few years for the return to pre-seismic susceptibility levels (e.g., Marc et al., 2015; Fan et al., 2018; Kincey et al., 2021), our findings still point to a relatively short period. Among the examined cases, the 2016 Reuleut earthquake is a clear example for discussing the possible factors controlling this relatively quick return to pre-seismic landslide rates.
FIGURE 6 | Landslide rates, number of landslides and daily precipitation for the examined time windows of the 2018 Porgera earthquake. Yellow stars show the date of the earthquake. Vertical dashed black lines indicate the dates of the satellite imagery used for mapping. In Panel A, the mean and standard deviation of daily accumulated precipitation for the respective time windows, calculated from a 20-year time series, are shown by black and gray lines, respectively. In Panel B, boxplots show minimum, median and maximum precipitation amounts as well as first and third quartiles and outliers.
The Reuleut earthquake triggered only 60 shallow landslides in the examined area although, within 110 days of the earthquake, we observed 742 new landslides at the same site (Table 1 and Figure 5). This later series of landslides exceeds the usual landslide rate in the area. However, from this time onward, the landslide rate recovers to its pre-earthquake pattern (Figure 5). The limited number of shallow co-seismic landslides implies that not much material was deposited on hillslopes and that remobilization processes through, for instance, debris flows are negligible. This shows that the post-seismic process is governed by the occurrence of new landslides and, therefore, the return to pre-seismic landslide rates could be relatively quick (e.g., Tian et al., 2020). Discarding the contribution of deposit availability, the most likely explanation for the high landslide susceptibility following the earthquake is a strength reduction in hillslope regolith and/or bedrock caused by ground shaking (e.g., Parker et al., 2015; Fan et al., 2019). In such cases, the post-seismic landsliding processes may be controlled by two mechanisms already postulated in the literature (e.g., Saba et al., 2010; Marc et al., 2015): 1) healing of soil and/or rock mass strength parameters and/or 2) the environmental stress due to subsequent rainfall discharge.
[Figure caption fragment, likely Figure 7: In Panels A and C, the mean and standard deviation of daily accumulated precipitation, calculated from a 20-year time series, are shown by black and gray lines, respectively. In Panel B, boxplots show minimum, median and maximum precipitation amounts as well as first and third quartiles and outliers.]
The healing of soil strength parameters is a proven process under certain circumstances (Lawrence et al., 2009; Fan et al., 2015; Bontemps et al., 2020). Specifically, in tropical landscapes, we can expect relatively fast recovery rates of the vegetation cover, which may play a large role in lateral root reinforcement for shallow landslide mitigation (e.g., Schwarz et al., 2010). However, vegetation recovery is a gradual process, and it may take three years even for fast-growing tree species in the tropics (Dislich and Huth, 2012). For instance, Yunus et al. (2020) examined the relationship between vegetation recovery and landslide rates via Normalized Difference Vegetation Index (NDVI) values and concluded that, based on the established NDVI trend alone, pre-seismic landslide rates can be attained within 18 years. Moreover, considering the persistent external stress caused by the precipitation regime in Reuleut, Indonesia (i.e., the absence of a dry season), healing of soil strength parameters is not likely to take place in such a short post-seismic period (i.e., 110 days).
The second alternative refers to the intensity and duration of the post-earthquake rainfall regime. Precipitation may negatively affect disturbed hillslopes that the earthquake has brought to a Factor of Safety (FoS) close to one. However, the rainfall may not be enough to bring the FoS to the brink of actual instability and failure. As a result, regardless of the abovementioned healing processes, post-seismic landslide rates might decrease gradually through time or decline rapidly, depending on the climatic conditions, particularly the intensity and persistence of precipitation.
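The FoS notion invoked here can be made concrete with the textbook infinite-slope model (given only for illustration; the paper itself performs no stability calculation), in which a rainfall-driven rise in pore-water pressure u pushes FoS toward unity:

```latex
% Infinite-slope factor of safety (textbook form):
%   c'    effective cohesion          phi'  effective friction angle
%   beta  slope angle                 gamma unit weight of the soil
%   z     depth of the failure plane  u     pore-water pressure
\[
\mathrm{FoS} \;=\; \frac{c' + \left(\gamma z \cos^{2}\beta - u\right)\tan\varphi'}
                        {\gamma z \sin\beta \cos\beta}
\]
```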
We can further discuss the intensity of landslide triggers by considering, for instance, post-seismic landslides following the 2005 Kashmir earthquake. After the first monsoon season following the Kashmir earthquake, Saba et al. (2010) observed only a few landslides despite the heavy precipitation. Our interpretation is in line with theirs, namely that the rainfall intensity might not have been enough to trigger further landslides. On the other hand, they also note that another possible reason for the lack of landslides is that all unstable slopes might have already failed by that moment. However, 'unstable slope' is a relative term, and a failure can occur on any slope if there is an excess of external forces disturbing the stability conditions.
In this context, our newly developed landslide dataset allows us to elaborate on the relativity of the term "unstable slope" and to make a simplified comparison between the intensity of rainfall and earthquake events as triggering agents that exacerbate slope stability conditions. The area affected by three earthquakes (2012 Sulawesi, 2017 Kasiguncu and 2018 Palu) shows that even relatively low-intensity ground shaking might be more effective than intense precipitation at triggering landslides. After the Sulawesi earthquake, the post-seismic landslide rates remained low until the 2017 Kasiguncu earthquake, although several intense rainfall events occurred between 2014 and 2017 (Figure 7). However, the high landslide rate associated with the 2017 Kasiguncu earthquake occurs despite the relatively weak ground shaking estimates reported by the U.S. Geological Survey ShakeMap system for the examined area (PGA ≈ 0.08-0.10 g) (Worden and Wald, 2016) (Figure 8A). This implies that the limited number of landslides related to rainfall events may not be due to the removal of all unstable slopes or to healing of hillslope materials but to a lack of triggers with sufficient intensity to cause failures on hillslopes, even when some of them have been previously damaged.
This research also provides some findings regarding the argument that the legacy of previous earthquakes can persist for years after an earthquake occurs (Parker et al., 2015). The Indonesian case, where we mapped three co-seismic landslide inventories for the same site, shows that there is an increasing trend in the co-seismic landslide rates over time (Figure 8B). With co-seismic landslides, the intensity of ground shaking is naturally the main factor controlling the landslide rates. In fact, the 2018 Palu earthquake (Mw 7.5) caused one of the biggest landslide events observed in this region, though the site had been hit by several large earthquakes previously (Watkinson and Hall, 2019). The Palu earthquake created strong ground motions within our study area, with PGA values ranging from 0.20 to 0.68 g (Figure 8A). Therefore, the peak landslide rate related to the Palu earthquake is a natural consequence of such a large earthquake. On the other hand, within the same study area, the severity of ground shaking related to the 2017 Kasiguncu earthquake (PGA ≈ 0.08-0.10 g) was lower than that of the 2012 Sulawesi earthquake (PGA ≈ 0.08-0.26 g). The level of ground shaking caused by the Kasiguncu earthquake is outside the zone in which the large majority of landslides (90% of the total landslide population) are located in most of the earthquake-induced landslide inventories in the literature. Specifically, Tanyaş and Lombardo (2019) identify the 0.12 g contour as the areal boundary of the zone containing at least 90% of the landslides. They also identify 0.05 g as the minimum PGA value triggering landslides. This means that our study area is located in a zone where we would not expect many failures caused by the Kasiguncu earthquake. However, the Kasiguncu earthquake triggered 382 landslides, and the post-seismic landslide rates of the Kasiguncu earthquake are relatively higher than those of the Sulawesi earthquake (Figure 8B), although there is no significant change in the precipitation regime (Figure 7). The relatively high landslide rates in this case might be explained by various factors, such as the frequency and/or duration of ground shaking (Jibson et al., 2004, 2019; Jibson and Tanyaş, 2020), and detailed analyses are required to better understand these controlling factors. Yet, among various possible explanations, we can also count the legacy of the Sulawesi earthquake as a factor dictating the higher landslide rate associated with the Kasiguncu earthquake. The variation in the mean (and standard deviation) of landslide rates for these three sets of post-seismic landslide inventories (see gray dots in Figure 8B) also suggests a similar conclusion: the legacy of previous earthquakes might play a role in the trend of increasing post-seismic landslide rates through time. The accumulated disturbance of hillslope materials might cause a small increase in the average landslide rate of a site. As a result, the background level of landslide susceptibility might be higher after each earthquake compared to previous earthquakes.
CONCLUSION
In this work, we examined the temporal evolution of landslides during post-seismic periods, in which the combined effect of earthquakes and rainfall causes particularly elevated landslide susceptibility. Specifically, we examined cases where rainfall acts as the main landslide trigger and seismicity plays the role of a predisposing factor. We focused on earthquakes that occurred in fully humid, tropical conditions for two reasons. First, post-seismic landslide processes have rarely been investigated in these settings; a new dataset from such rarely examined conditions can therefore offer valuable information to better understand post-seismic processes, which are mainly governed by site-specific environmental factors (e.g., seismicity, climate) (e.g., Tian et al., 2020). Second, the high and persistent precipitation regimes typical of tropical environments provide ideal conditions for the continuous genesis of slope failures, making it possible to obtain landslide inventory time series of high spatial and temporal resolution. The average temporal resolutions of our inventories are approximately eight, seven, and five months for the areas affected by the Reuleut, Porgera, and Palu earthquakes, respectively (Table 1).
We observed that, for the environmental settings under consideration, the landslide susceptibility levels associated with the occurrence of new landslides return to pre-seismic conditions in less than a year. This implies that elevated landslide susceptibility can disappear rapidly if the area is exposed to strong and persistent rainfall. However, this does not mean that prolonged and strong precipitation regimes always bring a rapid decline in elevated landslide susceptibility. Site-specific characteristics of a study area, such as seismotectonic, morphologic, geologic, and climatic conditions, as well as the sediment budget associated with co-seismic landslide events, govern the evolution of post-seismic periods. The possible roles of these factors need to be examined by further analyses.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; the inventories we mapped for this study are shared through the NASA Landslide Viewer (https://landslides.nasa.gov). Further inquiries can be directed to the corresponding author. | 7,354.2 | 2021-03-05T00:00:00.000 | [
"Geology"
] |
Image shift due to atmospheric refraction: prediction by numerical weather modeling and machine learning
Abstract. We develop and study two approaches for predicting optical refraction effects in the lower atmosphere. Refraction can cause apparent displacement or distortion of targets when viewed by imaging systems, or produce steering when propagating laser beams. Low-cost, time-lapse camera systems were deployed at two locations in New Mexico to measure image displacements of mountain ridge targets due to atmospheric refraction as a function of time. Measurements for selected days were compared with image displacement predictions provided by (1) a ray-tracing evaluation of numerical weather prediction data and (2) a machine learning algorithm with measured meteorological values as inputs. Both model approaches are described, and their target displacement predictions were found to be consistent with the field imagery in overall amplitude and phase. However, short-time variations in the experimental results were not captured by the predictions, where sampling limitations and uncaptured localized events were factors.
Introduction
The Earth's atmosphere includes several phenomena that affect the propagation of light. While scattering and absorption by clouds, fog, and aerosols primarily affect the intensity of the radiation received, the atmosphere can also affect the spatial resolution properties and the propagation trajectory of light. For example, atmospheric turbulence causes image shimmering and blurring and is stochastic in nature, with fluctuations over short timescales (e.g., milliseconds). Another phenomenon is atmospheric refraction, where refractive index gradients can steer or bend light rays. The index gradients are associated with changes in air density, which for optical wavelengths is primarily a function of air temperature gradients. Atmospheric refraction tends to cause more deterministic, larger-scale effects than turbulence, and the effects can persist from minutes to hours. 1-4 The interest here is refraction in the lower atmosphere, which can cause apparent displacement or distortion of objects when viewed by imaging systems or produce steering when propagating laser beams.
For several years, we have been developing a low-cost, mobile camera system to study atmospheric refraction at New Mexico State University. One system was recently deployed at White Sands Missile Range (WSMR), New Mexico (NM), and a second system was set up at the Jornada Experimental Range (JER), near Las Cruces, NM. Both systems collect time-lapse images of distant natural targets, such as mountain ridges. A time-lapse system was previously used in Las Cruces, New Mexico, with a building as a target to study diurnal image displacement due to refraction. 5 A similar system in Dayton, Ohio, was used by Basu et al. 4 to investigate the temporal variations of the refractive index gradient. Time-lapse imagery has also been used to investigate the apparent stretch and compression of objects due to atmospheric refraction lensing effects 6 and the approach has also been applied to the estimation of turbulence strength. 7 The prediction of atmospheric refraction effects can be advantageous for many terrestrial optical applications where prior knowledge of the light's trajectory can improve the speed and accuracy of pointing and tracking functions. The goal of this paper is to develop and evaluate two different methods, numerical weather prediction (NWP) and machine learning (ML), for predicting image displacement due to atmospheric refraction. NWP is an attractive approach for our application as it is deeply rooted in physics. However, it is computationally expensive, and the results are subject to initial conditions and terrain characteristics. An alternative, more empirical tactic is to apply an ML algorithm to build a predictive model based on local meteorological data. In this paper, we describe our application of NWP and ML methods to image displacement due to refraction and compare the results with time-lapse camera measurements.
Time-Lapse Image Collection and Processing
During January and February of 2018, we collected image data with a time-lapse camera located at WSMR that was pointed generally north at a natural desert landscape and a mountain range (Oscura Mountains) on the horizon. Another camera was set up at JER and was pointed west to image a mountain range (Dona Ana Mountains) and a desert valley. This system began collecting images in May 2018 and is still operating. The mountain ridgelines observed were at distances of about 20 km for JER and over 100 km for WSMR.
The battery-powered camera systems are easily transportable and consist of a weatherproof case on a tripod that contains a Nikon D5200 camera operated in a time-lapse mode. A zoom lens is set at its maximum focal length of 400 mm for the WSMR measurements and at 300 mm for the JER observations. The camera is typically programmed to collect images in 5-min intervals with a fixed 5.6 f-number and automatic shutter speed. Example frames of the mountain targets and valleys for the WSMR and JER experiments are shown in Fig. 1. The rectangles indicate areas in the images that were cropped and used for the refraction analysis in this paper.
Local weather has a significant effect on the vertical temperature gradients that are primarily responsible for atmospheric refraction effects. The weather variables of interest in our study include temperature, humidity, pressure, and solar radiation. For the WSMR experiment, online meteorological data were downloaded from a weather station near the target mountain. A Davis Vantage Vue weather station next to the camera was utilized for the JER experiment. These measurements are interpolated in time to align with the time-lapse image frames. The Kanade-Lucas-Tomasi point-tracking algorithm 8,9 is implemented to measure the apparent motion of the mountain ridges in the images. 10 The general steps of the data-processing approach are depicted in Fig. 2. An area containing the far-field target in each frame is cropped, and the N "best" features in the subframe are determined by a threshold setting. The results are stored in a feature list in descending order of "goodness." The algorithm then tracks these features in consecutive frames. A near-field reference object close to the camera is also selected, and point-tracking of this feature is applied in the analysis to remove shifts in the far-field images that are due to camera platform motion. Figure 3(a) shows some selected points associated with the cropped image of the mountain peak in the JER experiment, and Fig. 3(b) shows the positions of these points in the next consecutive frame. The point-tracking algorithm on average selects the same points in each frame. The vertical positions of the multiple points are averaged to give a measurement of the ridge position.
After point-tracking, the near-field average pixel coordinates are subtracted from the far-field coordinates frame by frame to obtain the apparent position of the far-field target. Displacement of the target's apparent position from frame to frame is attributed to changes in atmospheric refraction. The most significant shifts are found to occur in the vertical direction.
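To make the processing pipeline concrete, the following is a minimal sketch of a KLT-style tracking step with OpenCV, including the near-field subtraction described above. The file names, crop windows, and parameter values are our own illustrative assumptions, not the authors' actual settings.

```python
# Sketch: measure the refraction-induced vertical shift between two frames
# using Shi-Tomasi feature selection and pyramidal Lucas-Kanade tracking.
import cv2
import numpy as np

def ridge_shift(prev_path, next_path, far_crop, near_crop, n_features=50):
    """Return the apparent vertical shift (pixels) of the far-field target.

    far_crop / near_crop are (y0, y1, x0, x1) windows around the far-field
    target (mountain ridge) and a near-field reference object.
    """
    prev = cv2.imread(prev_path, cv2.IMREAD_GRAYSCALE)
    nxt = cv2.imread(next_path, cv2.IMREAD_GRAYSCALE)

    def mean_dy(crop):
        y0, y1, x0, x1 = crop
        a, b = prev[y0:y1, x0:x1], nxt[y0:y1, x0:x1]
        # Select the N "best" corner features in the cropped subframe.
        p0 = cv2.goodFeaturesToTrack(a, maxCorners=n_features,
                                     qualityLevel=0.01, minDistance=5)
        # Track those features into the next frame.
        p1, status, _ = cv2.calcOpticalFlowPyrLK(a, b, p0, None)
        ok = status.ravel() == 1
        # Average vertical displacement of the successfully tracked points.
        return float(np.mean(p1[ok, 0, 1] - p0[ok, 0, 1]))

    # Subtract the near-field motion to remove camera-platform wobble.
    return mean_dy(far_crop) - mean_dy(near_crop)
```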
Numerical Weather Prediction and Ray Tracing
NWP is a discipline in which governing equations and parameterizations that describe fluid flow and other physical processes are applied to current (or previous) weather conditions to provide a future forecast. For our purposes, the model results can be used to predict the vertical structure of the refractive index in the atmosphere. However, an extension of the established models is required to provide higher spatial resolution along our paths of interest. 11,12 In this section, we describe our approach for using refractive index gradient data generated by NWP and the application of ray tracing to determine corresponding image shifts. The results of the approach are compared with time-lapse measurements in Sec. 3. The numerical weather model (the Weather Research and Forecasting, or WRF, model) uses initial and boundary conditions from a global-scale reanalysis data set (ERA-5) and topographical effects to generate the refractive gradient data for a particular location and time range corresponding to our field measurements. 12 Figure 4 illustrates the refractive index gradient [dn(h)/dh] model results for the WSMR experiment time-lapse imaging path on the morning of February 5, 2018, where n(h) is the vertical profile of the refractive index as a function of height h. The gradient values are presented as a function of altitude relative to mean sea level and of distance along the propagation path. In Fig. 4, the camera site is on the left and the mountain ridge is on the right. The target corresponding to this result is the rectangular area indicated at the lower right side of the mountain ridge in Fig. 1(b). In fact, the actual mountain peak in Fig. 1(b) is a narrow ridge that is not adequately resolved by the current NWP model spatial resolution (∼1 km), so NWP results are not available for the mountain peak. However, the model result shown in Fig. 5 allows us to examine the shift of a lower portion of the mountain ridge at about 1400 m in elevation, where the viewing path is nearly horizontal across the basin.
Ray tracing through the gradient profiles is used to determine the image displacements predicted by the model. Ray-tracing techniques are often applied for refraction analysis over near-ground horizontal paths assuming diffraction effects are not significant. 3,13 The NWP data essentially consist of "blocks" of constant refractive index gradient, as indicated in Figs. 4 and 5. Rather than using a conventional linear ray-tracing algorithm that requires subsampling the blocks to provide accurate trajectories, we apply a second-order ray tracer 14 where the linear ray transfer equation is expanded with a quadratic correction term to model the curved ray trajectory within each block. This approach requires only one tracing step for each data block and is significantly faster than a linear ray trace approach with subsampling. A summary of this method is now presented.
Consider a two-dimensional form of the eikonal equation describing the ray trajectory in an inhomogeneous medium: 15

d^2h/dx^2 = (1/n) dn/dh,  (1)

where h is the height and x is the horizontal distance. This expression assumes horizontal paraxial propagation with a vertical refractive index gradient. Each block in the NWP data has a constant vertical gradient value indicated by κ = dn/dh and, assuming n(h) ≈ 1, the solution to Eq. (1) for the ray vertical position within a block can be given as

h(x) = h_0 + θ_0 x + (κ/2) x^2,  (2)

where θ_0 and h_0 are the initial ray angle and height, respectively. Taking the derivative of Eq. (2) with respect to x gives the ray trajectory angle as a function of distance,

θ(x) = θ_0 + κ x.  (3)

With respect to the block boundaries in the NWP index gradient data, Eqs. (2) and (3) are used in succession to transfer the ray height between one boundary and the next and then provide the bending of the ray angle for the next block. Iteratively, the equations become

h_{j+1} = h_j + θ_j Δx + (κ_j/2) Δx^2  (4)

and

θ_{j+1} = θ_j + κ_j Δx,  (5)

where Δx is the distance between adjacent blocks (∼1 km for our NWP data) and j is an index that identifies the different blocks. Example ray trajectories generated by the tracing approach are illustrated in Figs. 4 and 5. Rays (200 for WSMR and 100 for JER) are launched from the image target (the mountain ridge at an elevation of ∼2200 m for WSMR and the basin edge at ∼1415 m for JER) over a range of initial angles (−0.1 to −8 mrad for WSMR and −1 to +1 mrad for JER). The ray trajectories are traced through the model gradients until they reach the ground near the camera. The specific ray that strikes the ground at the camera location is identified, and a line is backprojected at the ray arrival angle. The height of this line at the target plane indicates the apparent target position as seen by the camera. The apparent positions are calculated for successive model frames, and the relative shifts are determined. For the results presented here, NWP results were computed at 10-min intervals and the ray-tracing procedure was applied. We note that it is also possible to trace the rays from the camera position toward the target position.
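The per-block update of Eqs. (4) and (5) is compact enough to sketch directly. The gradient profile, block width, and launch parameters below are illustrative assumptions, not the NWP values used in the paper.

```python
# Minimal sketch of the second-order ray transfer described above: within each
# block of constant vertical index gradient kappa_j = dn/dh, the ray height
# gains a quadratic correction term (Eq. 4) and the angle bends (Eq. 5).
import numpy as np

def trace_ray(h0, theta0, kappas, dx=1000.0):
    """Trace one paraxial ray through blocks of constant dn/dh.

    h0: launch height (m), theta0: launch angle (rad),
    kappas: gradient dn/dh for each block (1/m), dx: block width (m).
    Returns arrays of heights and angles at each block boundary.
    """
    h, th = [h0], [theta0]
    for k in kappas:
        # Eq. (4): height transfer with the quadratic correction term.
        h.append(h[-1] + th[-1] * dx + 0.5 * k * dx**2)
        # Eq. (5): angle update from bending within the block.
        th.append(th[-1] + k * dx)
    return np.array(h), np.array(th)

# Launch a fan of rays from the target; the ray reaching the camera height is
# then back-projected at its arrival angle to give the apparent target position.
kappas = np.full(100, -2e-8)            # assumed uniform gradient, 100 blocks
heights, angles = trace_ray(2200.0, -2e-3, kappas)
```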
Machine Learning Predictions
In this section, we describe an ML approach to predict image displacement due to atmospheric refraction based on a set of measured meteorological values. The input variables we use for prediction are temperature (T), humidity (H), pressure (P), and solar radiation (S). The predicted output is the image displacement (ŷ) due to refraction. Other available meteorological data, such as wind speed, were also applied as test inputs to the ML model, but we found that these alternative parameters had little influence on the prediction results. The weather station for the JER experiment provides measurements of these variables at 15-min intervals. In addition, we utilize other local measurements available online at hourly intervals. Prior to input to the algorithm, the measurement values are normalized to the range (0,1) by dividing each value by the maximum value observed. Our ML prediction process follows the conventional approach of splitting the image displacement and meteorological data into three sets: training, validation, and testing. Prediction performance is assessed by comparing the ML results with the testing data set.
The ML approach is based on linear regression, and we assumed a model of the form

ŷ(T, H, P, S; w) = w_1 + w_2 T + w_3 T^2 + w_4 H + w_5 H^2 + w_6 P + w_7 P^2 + w_8 S + w_9 S^2 + w_10 TH + w_11 TP + w_12 TS + w_13 HP + w_14 HS + w_15 PS,  (6)

where w = [w_1, ..., w_15]^T are the coefficient weights, and linear, squared, and pairwise products of the meteorological parameters are used as nonlinear kernel functions. The choice of kernel functions was based on trial-and-error experimentation. The coefficient values w are determined by fitting Eq. (6) to the training data, minimizing an error function that measures the misfit between ŷ and the measured values y = [y_1, ..., y_N]^T as a function of w. Our choice of error function is the regularized squared error 16,17

E(w) = (1/2) Σ_{n=1}^{N} [ŷ(T_n, H_n, P_n, S_n; w) − y_n]^2 + (λ/2)‖w‖^2.  (7)

The term (λ/2)‖w‖^2 is a penalty (regularization) term to avoid overfitting, where the parameter λ is a model input that governs the relative importance of the regularization term compared with the squared error term. Given N data points (T_n, H_n, P_n, S_n, y_n), the coefficients w that minimize the cost function in Eq. (7) are obtained in closed form by differentiating E(w) with respect to w, setting the result to zero, and solving for w. This produces the following well-known result: 17

w = (Φ^T Φ + λ I)^{−1} Φ^T y,  (8)

where I is the identity matrix and Φ is the design matrix whose n-th row contains the kernel functions evaluated at the n-th observation,

Φ_n = [1, T_n, T_n^2, H_n, H_n^2, P_n, P_n^2, S_n, S_n^2, T_n H_n, T_n P_n, T_n S_n, H_n P_n, H_n S_n, P_n S_n].  (9)

As shown in Eq. (9), the input observations are arranged as row vectors. The algorithm steps involve constructing the matrix Φ, obtaining the vector y, and applying Eq. (8) to compute the weights w. Referring to Eq. (7), in order to generate a predictive model that generalizes well to new input data, the value of λ must strike a balance between overfitting and underfitting of the training data. A larger value of λ tends toward a "simpler" fit, but with more likelihood of underfitting the data, where some of the dominant trends are not captured. On the other hand, a low value of λ provides a more "complex" fit, but with more likelihood of overfitting the data, where specific noise events are captured as trends. We use a tuning approach to determine the value of λ, in which a search over a range of values is performed. For each λ trial, the model is first fit to the training data set. This model realization is then applied to the validation data, and the mean squared error (MSE) between the model and the measured validation displacement is calculated. The value of λ that provides the lowest MSE is selected and used for the subsequent prediction comparisons using the testing data.
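A compact sketch of Eqs. (6) through (9), including the λ grid search described above, is given below. The function and variable names are ours; the actual data arrays and search grid would come from the measured meteorological and displacement time series.

```python
# Sketch: build the 15-term kernel design matrix of Eq. (9), solve the
# closed-form ridge solution of Eq. (8), and tune lambda on a validation split.
import numpy as np

def design_matrix(T, H, P, S):
    """Rows of Eq. (9): [1, T, T^2, H, H^2, P, P^2, S, S^2, TH, TP, TS, HP, HS, PS]."""
    one = np.ones_like(T)
    return np.column_stack([one, T, T**2, H, H**2, P, P**2, S, S**2,
                            T*H, T*P, T*S, H*P, H*S, P*S])

def fit_ridge(Phi, y, lam):
    """Closed form of Eq. (8): w = (Phi^T Phi + lam*I)^-1 Phi^T y."""
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

def tune_lambda(Phi_tr, y_tr, Phi_val, y_val, grid):
    """Grid-search lambda by minimizing the validation MSE, as described above."""
    def val_mse(lam):
        w = fit_ridge(Phi_tr, y_tr, lam)
        return np.mean((Phi_val @ w - y_val) ** 2)
    best = min(grid, key=val_mse)
    return best, fit_ridge(Phi_tr, y_tr, best)
```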
Finally, we mention that for the JER experiment, the model form in Eq. (6) was used, but we added a fifth, binary variable (D) that takes a value of 1 if the sky is "clear" or 0 if it is "cloudy." The value of D is determined from a visual analysis of the sky conditions in the time-lapse imagery. If the sky appears cloudy in more than 50% of the frames during the daytime hours, then D is set to 0; otherwise, it is set to 1. Although this is an unrefined measurement and a simple binary parameter, we found that the approach improved the accuracy of the model under varying sky conditions.
Results
NWP results were computed at 10-min intervals, and the ray-tracing procedure was applied to determine the target shifts. The shift results were interpolated in time to align with the time-lapse image frames. Figure 6 shows comparative results for the WSMR experiment on February 5, 2018, as a function of time of day, where the red curve is the apparent shift (in radians) of the mountain ridge as derived from ray tracing the NWP data. The black curve is the shift predicted by the ML algorithm where, prior to the date shown, we trained the algorithm on 4 days of displacement and meteorological data (500 data points) and tuned the result on 2 days of validation data. The search range for λ was (0, 40) and, for these results, λ was found to be 2. The blue curve is the shift measured in the actual camera frames. Measurements are only available for the daytime hours, as the ridge is not clearly distinguishable by the camera at night.
The general downward drift throughout the daytime hours, as shown in Fig. 6, is an effect we commonly observe in clear weather, and it corresponds to a slow reduction in the average refractive index gradient of the atmosphere along our line of sight. The NWP ray trace and the ML model results agree well in amplitude and phase with the image shift results from the camera measurements, although there are differences in the short-time variations. By manually evaluating some of the time-lapse images, we verified that the point-tracking results appear to be accurate, and so we believe the short-time variations are due to the atmosphere. These excursions probably represent turbulent fluctuations in the refractive index that are not completely captured by the NWP, and it is not possible to predict these short-time variations with the ML model because of differences in sampling: the time intervals for the meteorological data (1 h) are much longer than the time-lapse intervals (5 min). The meteorological measurements were also not collected directly in the imaging path, which could contribute to further differences between the measurements and the ML results.

Figure 7 presents example shift results for the lower part of the ridge in the JER experiment on July 18, 2018. Details of the data-processing approach are the same as described for the WSMR results (Fig. 6). For this result, the training and validation data sets consisted of 2 days each, and λ was found to be 0.01. Like the WSMR results, the NWP ray trace and ML predictions in this case demonstrate the same overall amplitude and phase behavior as the time-lapse measurements, but the short-time variations are not captured.

Figure 8 shows the measurement and ML prediction results for the mountain peak in the JER experiment [Fig. 1(b)] over a period of 6 days, from January 20, 2019 to January 25, 2019. As discussed in Sec. 2.2, NWP results are not available for this peak target. The 4-day training set in this case included different weather conditions (sunny, very cloudy, and mixtures of sunny and cloudy conditions). The value of λ was found to be 5. The weather was sunny and clear for all the days shown except for January 22, when the sky was cloudy. The results show that the ML model prediction for different weather conditions and over different path angles is consistent with the general trends of the measurements. We note that the cloudy-day result shows less of the daily downward progression of the peak position. Note also that the overall shifts for these JER mountain peak results are significantly smaller than the results for WSMR (Fig. 6). This is likely because the WSMR ridgeline is much farther from the camera (>100 km) than the ridgeline for JER (∼20 km).
We end this section with a few comments about the range of values found for the regularization parameter λ. Generally, we expect a smaller value of λ (less regularization) to be found when the short time variation amplitudes are relatively smaller than the mean trend deviations within the training and validation data. However, the value of λ selected through our tuning approach appears to be sensitive to other factors such as the shape of the mean trend curve. Also, because we trained and validated the model on relatively small data sets, even adding a few days of data or including a few unusual shift results can affect the λ value. It is important to note that although the tuning process identifies a particular λ value, we found through additional testing that there is a range that typically produces nearly the same prediction result. For example, similar ML results to that shown in Fig. 8 can be produced with λ values ranging from 0.001 to about 10.
Conclusions
We found that NWP along with a ray-tracing technique can be used to predict image displacement effects due to atmospheric refraction. The ray-tracing approach that we applied to determine the effect of the gradients produced by the model was straightforward to implement and provided credible results. The model results of target displacement were consistent with field imagery in amplitude and overall trend. However, the model could not predict some of the short-time variations in the field measurements, which may be due to localized events that are not completely captured by the numerical model grid or simulation process. As an alternative to NWP, we explored the use of an ML algorithm to build a predictive model based on meteorological data collected near the camera location. We found that our ML model was successful in making predictions over different weather conditions. Similar to the NWP result, the ML model prediction could not follow the short-time excursions of the field results. In this case, the slow sample rate of the meteorological data compared to the time-lapse image frame rate is an aggravating factor. We are now working to determine whether the ML approach can be extended to encompass different seasons as well as different weather conditions. We are also investigating our ability to apply image point-tracking and ML prediction to detect geometrical distortions of the target image rather than just the apparent target shift. | 5,742.8 | 2020-03-10T00:00:00.000 | [
"Environmental Science",
"Physics",
"Computer Science"
] |
Pre-frail older adults show improved cognition with StayFitLonger computerized home-based training: a randomized controlled trial
Multidomain interventions have shown tremendous potential for improving cognition in older adults. It is unclear if multidomain interventions can be delivered remotely and whether remote intervention is beneficial for older adults who are vulnerable or at risk of cognitive decline. In a 26-week multi-site, home-based, double-blind, randomized controlled trial, 120 cognitively healthy older adults (75 robust, 45 pre-frail; age range = 60–94) recruited from Switzerland, Canada, and Belgium were randomized to receive either the StayFitLonger (SFL) computerized multidomain training program or an active control intervention. Delivered on tablets, the SFL intervention combined adapted physical exercises (strength, balance, and mobility), cognitive training (divided attention, problem solving, and memory), opportunities for social and contributive interactions, and psychoeducation. The active control intervention provided basic mobilization exercises and access to video games. Cognitive outcomes were global cognition (Z-scores of attention, verbal fluency, and episodic memory for nondemented older adults; ZAVEN), memory, executive function, and processing speed. Linear mixed model analyses indicated improved performance on the ZAVEN global cognition score in the SFL group but not in the active control group. Stratified analyses by frailty status revealed improved ZAVEN global cognition and processing speed scores following SFL in the pre-frail group but not in the robust group. Overall, the study indicates that a computerized program providing a multidomain intervention at home can improve cognition in older adults. Importantly, pre-frail individuals, who are at higher risk of cognitive decline, seem to benefit more from the intervention. Trial registration: ClinicalTrials.gov, NCT04237519. Registered on January 22, 2020 (retrospectively registered), https://clinicaltrials.gov/ct2/show/NCT04237519. Supplementary Information The online version contains supplementary material available at 10.1007/s11357-022-00674-5.
Introduction
Age-related cognitive decline is associated with modifiable risk factors that can be addressed with non-pharmacological approaches [1]. For this reason, multidomain prevention programs that target a subset of modifiable factors have been developed to promote cognitive health in older adults [2]. The positive impact of multidomain interventions has been observed in a few studies that evaluated their effect in older adults at risk of cognitive decline. For example, the FINGER study, which combined face-to-face physical activity with computerized cognitive training [3], reported a positive effect on overall cognition and a reduced risk of cognitive decline. Thus, prevention programs have enormous potential to protect older adults from the deleterious effects of brain aging on cognition, which can ultimately preserve their independence [2,4].
While prior studies reported encouraging effects, some issues remain to be addressed. The first relates to the accessibility and flexibility of face-to-face multidomain interventions. Older adults may have mobility challenges or live in remote areas without access to community resources providing face-to-face interventions. With the increase in technological literacy among older adults, there has been considerable recent interest in developing computerized programs to deliver home-based interventions. These interventions can increase flexibility of use, reduce costs, and thus facilitate the scaling up of interventions. Computerized programs allow for real-time feedback on performance, control of item timing, and gamification, among other advantages. Surprisingly, only a few studies have evaluated at-home physical activity training or multidomain programs [5][6][7][8][9].
A second important issue to address is interindividual variability in response to computerized multidomain interventions. From a personalized medicine perspective, it is important to know who the responders are and what characterizes them. In the present study, we examined efficacy as a function of frailty status, defined as a state of heightened vulnerability due to impairment of multiple systems [10,11]. Frailty is an important predictor of loss of independence and cognitive decline and is thus a highly relevant marker of vulnerability in old age [12]. Our predictions are based on two frameworks: the compensation/reserve model posits that vulnerable older adults will benefit the most from these interventions, whereas the magnification model posits that cognitive improvement following an intervention involves brain plasticity, so the fittest individuals will benefit most because their brains are more plastic [13,14].
Here, we report on a 26-week double-blind parallel-group randomized controlled trial (RCT), which examined the cognitive effects of the homebased computerized multidomain intervention StayFitLonger (SFL), combining physical exercise and cognitive training, compared to an active control condition. Results are reported for the full sample and then separately for pre-frail and robust older adults. We hypothesized that the SFL group would have a larger pre-post intervention effect than the control group. As the compensation model has been most often supported, we predicted a larger SFL advantage in pre-frail participants compared to robust ones.
Methods
The study was pre-registered (ClinicalTrials.gov identifier: NCT04237519) and follows the recommendations of the updated Consolidated Standards of Reporting Trials (CONSORT) statement [15,16]. All procedures were reviewed and approved by the Research Ethics Board (REB) in each country: Switzerland: REB Canton de Vaud (application #2018-01898, last approval December 4, 2018); Canada: REB vieillissement-neuroimagerie of the CIUSSS-CSMTL (application #18-19-29, last approval December 14, 2018); Belgium: REB Cliniques Universitaires Saint-Luc, UCLouvain, Bruxelles (application #B403201941535, last approval October 15, 2019). The nature, benefits, and risks of the study were explained to all subjects, and their written informed consent was obtained prior to participation. The cognitive outcomes reported here were identified as secondary outcomes. The primary outcome and secondary psychosocial outcomes are reported separately. As the protocol of the SFL study was published previously [17], only the main aspects of the methods are described.
Design
The efficacy trial was a 26-week, double-blind, parallel-group, multi-centric RCT. Participants were randomized to either the SFL home-based computerized multidomain intervention or a home-based active control intervention. Outcome measures were collected at pre-training (PRE; within 6 weeks prior to the start of the intervention) and post-training (POST; within 4 weeks following the end of the intervention). Randomization was done independently of the research team with a 1:1 ratio, stratified by frailty status, using REDCap. Participants were blinded to the nature of their intervention (experimental vs comparator), and assessors were blinded to the hypotheses and to the participants' assignment. Statistical analyses were performed blind to the intervention condition.
Study population and entry criteria
Participants were recruited from three sites: Centre Leenaards de la mémoire -Centre hospitalier universitaire Vaudois (CHUV), Switzerland; Institut universitaire de gériatrie de Montréal of the Centre intégré universitaire de santé et de services sociaux Centre-Sud-de-l'Île-de-Montréal (CIUSSS-CSMTL), Canada; and Brusano and Centre Public d'Action Sociale (CPAS) of Woluwe-Saint-Lambert, Belgium. The participant flow is shown in Fig. 1. Of the 161 participants tested for eligibility, 120 were randomized (64 in Switzerland, 32 in Canada, and 24 in Belgium). Fifty-nine were allocated to the SFL intervention and 61 to the active control. As participants from the Belgian site were included during the COVID-19 pandemic, the introductory courses, which were provided in group sessions in the other sites [17], were provided to participants through videos followed by a home visit from the instructor.
Included participants were fluent French-speaking community-dwelling adults aged 60 years and over with normal scores on the 4-Instrumental Activities of Daily Living (4-IADL) scale [18], a score ≥ 26 on the Montreal Cognitive Assessment (MoCA) [19], a score < 3 on the Fried's frailty index [11], no motor or vision problems, no current neurological or psychiatric diagnoses (e.g., Parkinson's disease), and access to a wireless Internet connection at home. Participants were identified as either robust (score of 0) or pre-frail (score of 1 or 2) based on Fried's index.
Interventions
Interventions were provided on a tablet (Samsung Galaxy Tab S2) and took place at home. Participants received occasional home visits and monthly phone calls to monitor their use and address any problems with the program. The mean overall time (in hours) that each group spent using the program was recorded and will be only briefly summarized here as it will be the topic of a separate publication on adherence (see design paper [17]).
SFL intervention
The SFL intervention included physical and cognitive training activities. The physical exercises (Exercise) focused on strength, balance, and mobility with various difficulty levels [20]. Cognitive training included activities for divided attention [21], problem solving [22,23], and memory [24]. To increase adherence and social interactions, participants had access to a moderated Chat Room, the possibility to create material for the activities, psycho-educational content, and gamification elements (e.g., rewards, leaderboards). A customizable virtual guide provided participants with instructions, reminders, and feedback. Participants were asked to engage in physical exercise at least 3 days per week for 30-45 min and cognitive exercise for at least three 15-min sessions per week.
Active control intervention
The active control intervention had similar structure, timing, and organization as the SFL program. Physical exercises included advice and tips to stay physically active and exercises to train strength, mobility, and balance of the upper and lower extremities. Unlike the SFL, the active control only had a limited number of physical exercises and did not include interactive videos, personalization, chat rooms, psycho-educational content, or a virtual guide. The cognitive activities were commercially available games that did not target specific cognitive processes or strategies [25–29] (e.g., crossword puzzles, Sudoku, maze arcade).
Outcome variables
Global cognition was measured with an adapted version of the ZAVEN composite score [30], which averages z-scores from the delayed free recall of the California Verbal Learning Test (CVLT), the delayed recall of the Wechsler Memory Scale-IV logical memory subtest [31], the number of correct symbols reported in the Wechsler Adult Intelligence Scale (WAIS-IV) digit symbol substitution test (DSST) [32], and letter fluency (the letter P at pre-training and R at post-training) [33]. An executive function composite score was computed by combining z-scores from the letter fluency test, Trail Making Test (TMT) part B-A (time) [34], the interference index of the Victoria Stroop Test [35], and the number of omissions on the divided attention subtest of the Test of Attention Performance [36]. A memory composite score was obtained from the delayed free recall score of the CVLT [37,38] and the logical memory task. A processing speed composite score was obtained from the TMT part A (time), the number of correct answers on the DSST, and the naming condition of the Victoria Stroop Test (time) [39]. Scores were inverted, when necessary, so that larger scores always reflected better performance. The composite scores were computed by standardizing performance on individual tests using the baseline mean and standard deviation (SD) of the entire group. A preliminary internal consistency analysis was conducted to contextualise the measures. This analysis was particularly relevant for the executive function, memory, and processing speed composite scores because they were meant to reflect a single cognitive construct. In contrast, the ZAVEN composite score was developed to diagnose preclinical Alzheimer's disease and is intended to cover multiple cognitive domains to provide greater sensitivity to cognitive decline. Because differences in expectations might explain some of the intervention effects, participants' expectations were measured at PRE and POST with a 15-item ad hoc questionnaire on a 7-point Likert scale.
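The composite-score construction described above follows a standard recipe: standardize each test against the baseline mean and SD of the whole sample, flip the sign of timed tests so that higher is always better, then average the relevant z-scores. The sketch below illustrates this; the column names are our own illustrative stand-ins, not the trial's actual variable names.

```python
# Sketch: baseline-standardized z-scores and composite-score averaging.
import pandas as pd

def zscores(scores: pd.DataFrame, baseline: pd.DataFrame, invert=()) -> pd.DataFrame:
    """Standardize each test against the baseline mean/SD of the entire group."""
    z = (scores - baseline.mean()) / baseline.std()
    if invert:                            # flip timed tests: lower time = better
        z[list(invert)] = -z[list(invert)]
    return z

def composites(z: pd.DataFrame) -> pd.DataFrame:
    """Average the relevant z-scores into composite scores (illustrative columns)."""
    return pd.DataFrame({
        "zaven":  z[["cvlt_recall", "logical_memory", "dsst", "fluency"]].mean(axis=1),
        "memory": z[["cvlt_recall", "logical_memory"]].mean(axis=1),
        "speed":  z[["tmt_a_time", "dsst", "stroop_naming_time"]].mean(axis=1),
    })
```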
Statistical analyses
The sample size was determined with a Marker Stratified Design, 1 considering a dropout rate of about 25% based on prior studies. All statistical tests were two-tailed with a p value < 0.05. Groups were compared for demographics and baseline characteristics with t-tests or chi-square analyses. A linear mixed model was used to analyze the intervention effect, controlling for age, sex, education, baseline MoCA score, and site. The fixed effects were intervention (SFL vs. active control), time (PRE, POST), and their interaction. In the presence of a significant interaction, post hoc comparisons were computed between PRE and POST in each group, and means and confidence intervals were assessed on pre-post change scores. Separate analyses were computed for each outcome. Significant interactions and group differences in favor of the SFL at POST and on change scores were expected if the SFL intervention was more beneficial than the active control. All analyses were first performed on the total sample, followed by separate analyses for pre-frail and robust individuals. To comply with an intention-to-treat (ITT) approach, all randomized participants were included in the model, and the clinical and socio-demographic characteristics of participants who withdrew were compared to those of participants remaining in the study (Supplementary Table 1). The effects of sex and the other controlled variables on the cognitive outcomes are shown in Supplementary Table 3.
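For readers unfamiliar with this model family, the following is a hedged sketch of such an intervention × time analysis using statsmodels, with a random intercept per participant. The synthetic data frame, its column names, and the reduced covariate set (the trial also controlled for sex, education, MoCA, and site) are illustrative assumptions only.

```python
# Sketch: linear mixed model with intervention, time, and their interaction
# as fixed effects and a per-participant random intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data (one row per participant x time point), purely
# for illustration; real analyses would use the trial's outcome data.
rng = np.random.default_rng(0)
n = 120
long_df = pd.DataFrame({
    "subject_id":   np.repeat(np.arange(n), 2),
    "time":         np.tile(["PRE", "POST"], n),
    "intervention": np.repeat(rng.choice(["SFL", "control"], n), 2),
    "age":          np.repeat(rng.normal(71, 6, n), 2),
    "zaven":        rng.normal(0, 1, 2 * n),
})

model = smf.mixedlm("zaven ~ intervention * time + age",
                    data=long_df, groups=long_df["subject_id"])
print(model.fit().summary())   # the interaction term tests the SFL effect
```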
Results
The mean age of the total sample was 71.33 years (range = 60–94; SD = 5.87). The average score on the MoCA was 28.97 (range = 26–30; SD = 1.17). Of the 120 participants, 79 were women. Table 1 reports the baseline characteristics of the sample as a function of intervention condition and frailty status. There were no baseline differences in sociodemographic or clinical variables between participants in the SFL vs. active control intervention (Supplementary Table 1). Mean weekly time spent using the program was 2.6 (SD = 0.3) and 3.8 (SD = 0.4) hours for the total SFL group and the active control condition, respectively; 2.4 (SD = 0.4) and 3.4 (SD = 0.7) hours for the pre-frail SFL and active control groups, respectively; and 2.7 (SD = 0.4) and 4.0 (SD = 0.4) hours for the robust SFL and active control groups, respectively. Cronbach alpha values were 0.54, 0.73, 0.54, and 0.66 for the ZAVEN, memory, executive, and speed composite scores, respectively. Figure 2 shows the global cognition (Fig. 2A), executive function (Fig. 2B), processing speed (Fig. 2C), and memory (Fig. 2D) composite scores for the total sample. Figure 3 shows the global cognition (Fig. 3A), executive (Fig. 3B), processing speed (Fig. 3C), and memory (Fig. 3D) composite scores for the pre-frail group; an intervention × time interaction was expected if the SFL intervention produced a larger effect. Figure 4 shows the corresponding global cognition (Fig. 4A), executive function (Fig. 4B), processing speed (Fig. 4C), and memory (Fig. 4D) composite scores for the robust group. The analyses of expectations showed no intervention × time interaction in the total sample, robust, or pre-frail groups, indicating that changes in expectations cannot account for the intervention effect (Supplementary Table 2). Regarding the controlled variables, we observed a site effect, as participants from the Swiss site performed better than those from the Canadian and Belgian sites (Supplementary Table 3). We also found a sex effect in favor of women for the ZAVEN global cognition and memory scores.
Discussion
This RCT assessed the efficacy of a computerized multidomain home-based intervention combining physical and cognitive exercises on the cognition of older adults. The ZAVEN global cognition score indicated a significant intervention × time interaction as the cognition of participants improved in the SFL intervention after training, unlike those in the active control condition. This finding is consistent with the FINGER study, which reported positive effects for a 2-year multidomain intervention on global cognition [3]. Unlike the FINGER study, which used usual care as a control, we used an active control condition where participants received physical activity guidelines and access to low-stimulation cognitive games. Furthermore, our study demonstrates a positive effect even though the duration is shorter than that of the FINGER study (6 months versus 2 years) and even though the intervention was provided remotely.
The positive effect found here deviates from the results summarized in Whitfield et al.'s [5] meta-analysis of four RCTs of remotely delivered multidomain interventions, which reported no cognitive improvement. However, Whitfield et al.'s results should be interpreted with caution given the small number of studies. Furthermore, there are important differences between this study and those reviewed by Whitfield et al. One relates to the intervention content, as the SFL intervention includes an individualized, progressive physical activity program with numerous illustrative videos, as well as empirically supported, gamified cognitive exercises.
Another important aspect here was to examine effects on pre-frail individuals, who demonstrated a better response to training, which could have increased our ability to detect an intervention effect. Indeed, we found that pre-frail individuals randomized to the SFL intervention improved their global cognition and processing speed scores after the intervention, unlike participants randomized to the active control condition and unlike robust participants enrolled in either intervention. Thus, the effect observed when examining the entire group seems to be largely driven by the pre-frail participants, who showed a stronger response to this multidomain intervention. The effect found on the speed composite score might suggest that the improvement in processing speed drove the improvement in the ZAVEN global cognition score. Note, however, that there were benefits from the intervention when looking at the data from the other cognitive domains, even though these were non-significant. Hence, future research should focus on determining the cognitive domains that benefit most from similar interventions. The observed difference between pre-frail and robust individuals is consistent with the reserve/compensation hypothesis, which posits that vulnerable individuals are more likely to benefit from interventions designed to compensate for their difficulties, weaknesses or disabilities [13,14]. There is some indication from prior studies that interventions may be beneficial to those who need them most, particularly if they are tailored to the characteristics of the target population (e.g., [40]). The physical exercises used here focused on strength and balance with a gradual, self-managed approach tailored to the sedentary older person. Similarly, our cognitive exercises were playful, which may be especially supportive for more vulnerable older adults. This underscores the importance of taking individual differences into account when designing and prescribing multidomain intervention programs.
The study has limitations that should be acknowledged. First, participants for the Belgian site were recruited and tested during the COVID-19 pandemic. Although we observed a site effect, this was due to the performance of the Belgian and Canadian sites being lower than that of the Swiss site; thus, there is no indication of an effect specific to the Belgian site and no evidence that it modified the intervention effect. Second, frail individuals were excluded from our sample because we focused on prevention, but it would be interesting to examine whether the program has a positive effect on cognition in frail older adults. Third, the sample size was estimated based on the physical outcome. Fourth, two of the composite scores used, executive function and global cognition, showed low internal consistency, indicating that they may reflect more than one cognitive construct; this was expected for global cognition but not for the executive composite score. Fifth, although transfer to real-world daily functioning is an important issue, we did not include such data, as our focus here was on cognition. Finally, the use of a purely computer-based intervention requires older adults to be technologically literate, which means that our sample was biased toward those with technological skills.
In conclusion, we report positive effects of a multidomain remote intervention on cognition in older adults and propose that pre-frail older adults may benefit most from the program. Another important feature of the study was the use of a computerized program that allowed the intervention to be conducted entirely in the participant's home, which has rarely been done in past studies. Using a computerized remote approach has many benefits: it reaches a larger audience than face-to-face interventions, it is cost-effective in the long term, it increases accessibility and flexibility, and it allows for personalization of the activities. The finding that more vulnerable older adults benefit most from an intervention to reduce cognitive decline supports public health interventions that encourage prevention strategies in older adults by specifically targeting a vulnerable population.
"Medicine",
"Computer Science"
] |
MaxiMask and MaxiTrack: Two new tools for identifying contaminants in astronomical images using convolutional neural networks
In this work, we propose two convolutional neural network classifiers for detecting contaminants in astronomical images. Once trained, our classifiers are able to identify various contaminants, such as cosmic rays, hot and bad pixels, persistence effects, satellite or plane trails, residual fringe patterns, nebulous features, saturated pixels, diffraction spikes, and tracking errors in images. They encompass a broad range of ambient conditions, such as seeing, image sampling, detector type, optics, and stellar density. The first classifier, MaxiMask, performs semantic segmentation and generates bad pixel maps for each contaminant, based on the probability that each pixel belongs to a given contaminant class. The second classifier, MaxiTrack, classifies entire images and mosaics by computing the probability for the focal plane to be affected by tracking errors. We gathered training and testing data from real data originating from various modern charge-coupled devices and near-infrared cameras, augmented with image simulations. We quantified the performance of both classifiers and show that MaxiMask achieves state-of-the-art performance for the identification of cosmic ray hits. Thanks to a built-in Bayesian update mechanism, both classifiers can be tuned to meet specific science goals in various observational contexts.
Introduction
Catalogs extracted from astronomical images are at the heart of modern observational astrophysics. Minimizing the number of spurious detections in these catalogs has become increasingly important because the noise added by such contaminants can, in many cases, compromise the scientific objectives of a survey. Properly identifying and flagging spurious detections yields substantial scientific gains, but it is complicated by the numerous types of contaminants that pollute images. Some of them stem from the detector electronics (e.g., dead or hot pixels, persistence, saturation), from the optics (diffraction along the optical path, scattered and stray light), from post-processing (e.g., residual fringes), while others are the results of external events (cosmic rays, satellites, tracking errors). The amount of data produced by modern astronomical surveys makes visual inspection impossible in most cases. For this reason, developing fully automated methods to separate contaminants from true astrophysical sources is a critical issue in modern astronomical survey pipelines.
Most current pipelines rely on fine prior knowledge of their instruments to detect and mask electronic contaminants (e.g., Bosch et al. 2018; Morganson et al. 2018) and, to some extent, optical contaminants (e.g., Kawanomoto et al. 2016a,b). Cosmic ray hits can be identified by rejecting outliers in the timeline, provided that multiple consecutive exposures are available, or by using algorithms sensitive to their peculiar shapes, such as Laplacian edge detection (e.g., LA Cosmic, van Dokkum 2001) or wavelets (e.g., Ordénovic et al. 2008). The Radon transform or the Hough transform have often been used to detect streaks caused by artificial satellites or planes in images (e.g., Vandame 2002; Nir et al. 2018).
In this work, we want to overcome some of the drawbacks of the above-mentioned methods. First, the typical data volume produced by modern surveys requires that the software be largely unsupervised and as efficient as possible. Second, we aim to develop a robust and versatile tool for the community at large and therefore want to avoid the pitfall inherent in software that is tailored to a single instrument or a handful of them, without compromising on performance. Third, we would like to have a unified tool able to detect many contaminants at once. Finally, we want to assign to each pixel a probability of belonging to a given contaminant class rather than Boolean flags. These constraints lead us to choose machine learning techniques, in particular supervised learning and convolutional neural networks (CNNs).
Supervised learning is a field of machine learning dealing with models that can learn regression or classification tasks from a data set containing the inputs and the expected outputs. During the learning process, model parameters are adjusted iteratively to improve the predictions made from the input data. The learning procedure itself consists of minimizing a loss function that measures the discrepancy between model predictions and the expected values. Minimization is achieved through stochastic gradient descent. We recommend Ruder (2016) for an overview of gradient descent based optimization algorithms.
Convolutional neural networks (LeCun & Bengio 1995) are particularly well suited for identifying patterns in images. Unlike previous approaches that involved hand-crafted feature detectors, such as SIFT descriptors (Lowe 1999), CNN models operate directly on pixel data. This is made possible by the use of trainable convolution kernels to detect features in images. Convolution is shift-equivariant, which allows the same features to be detected at any image location.
CNNs are now widely used in various computer vision tasks, including image classification, that is, assigning a label to a whole image (Krizhevsky et al. 2012; Simonyan & Zisserman 2014; Szegedy et al. 2015), and semantic segmentation, that is, assigning a label to each pixel (Long et al. 2015; Badrinarayanan et al. 2017; Garcia-Garcia et al. 2017).
In this work, we propose to identify contaminants using both image classification and semantic segmentation.
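To make the distinction between the two output types concrete, the following toy PyTorch model shows both heads side by side: a per-pixel segmentation head (the MaxiMask style of output) and a whole-image classification head (the MaxiTrack style). The layer sizes and architecture are purely illustrative assumptions, not the published networks.

```python
# Toy model contrasting semantic segmentation and image classification heads.
import torch
import torch.nn as nn

class TinyContaminantNet(nn.Module):
    def __init__(self, n_classes=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        # Segmentation head: a 1x1 convolution gives one probability map per
        # contaminant class (sigmoid, not softmax, since a pixel may belong
        # to several contaminant classes at once).
        self.pixel_head = nn.Conv2d(16, n_classes, 1)
        # Classification head: a single image-level probability, e.g.,
        # "this exposure is affected by a tracking error".
        self.image_head = nn.Linear(16, 1)

    def forward(self, x):                      # x: (batch, 1, H, W)
        f = self.features(x)
        seg = torch.sigmoid(self.pixel_head(f))                  # (batch, n_classes, H, W)
        cls = torch.sigmoid(self.image_head(f.mean(dim=(2, 3)))) # (batch, 1)
        return seg, cls

seg, cls = TinyContaminantNet()(torch.randn(2, 1, 400, 400))
```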
In the following, we first describe the images that we used and how we built our data sets. Then, we focus on the neural network architecture that we used. Finally, we evaluate the models' performance on test sets and on real data.
Data
In this section, we describe the data used to train our two neural networks. We distinguish between two types of contaminants. On the one hand, local contaminants affect only a fraction of the image at specific locations. These include cosmic rays, hot columns and lines, dead columns and lines, dead clustered pixels, hot pixels, dead pixels, persistence, satellite trails, residual fringe patterns, "nebulosity", saturated pixels, diffraction spikes, and overscanned pixels, adding up to 12 classes. On the other hand, global contaminants, such as tracking errors, affect the whole image.
Local contaminant data
For local contaminants, we choose to build training samples by adding defects to uncontaminated images in order to have a ground truth for each contaminant. In this section, we first describe the library of astronomical images used for our analysis, then focus on the selection of uncontaminated images, and finally describe the way each contaminant is added.
Library of real astronomical images
In an effort to have the most realistic dataset, we choose to use real data as much as possible and take advantage of the private archive of wide-field images gathered for the COSMIC-DANCE survey (Bouy et al. 2013). The COSMIC-DANCE library offers several advantages. First, it includes images from many past and present optical and near-infrared wide-field cameras. The images cover a broad range of detector types and ground-based observing sites, ensuring that our dataset is representative of most modern astronomical wide-field instruments. Table 1 gives an overview of the properties of the cameras used to build the image database. Second, most problematic exposures featuring tracking/guiding loss, defocusing, or strong fringing were already identified by the COSMIC-DANCE pipeline, providing an invaluable sample of real problematic images.
In all cases except for Megacam, DECam, UKIRT, and HSC exposures, the raw data and associated calibration frames were downloaded and processed using standard procedures with an updated version of Alambic (Vandame 2002), a software suite developed and optimized for the processing of large multi-chip imagers. In the case of Megacam, the exposures processed and calibrated with the Elixir pipeline were retrieved from the CADC archive (Magnier & Cuillandre 2004). In the case of DECam, the exposures processed with the community pipeline were retrieved from the NOAO public archive (Valdes et al. 2014). UKIRT exposures processed by the Cambridge Astronomical Survey Unit were retrieved from the WFCAM Science Archive. Finally, the HSC raw images were processed using the official HSC pipeline (Bosch et al. 2018). In all cases, a bad pixel map is associated with every individual image. In the case of DECam and HSC, a data quality mask is also associated with each individual image and provides integer-value codes for pixels that are not scientifically useful or are suspect, including in particular bad pixels, saturated pixels, cosmic ray hits, and satellite tracks. All the images in the following consist of individual exposures, not co-added exposures.
Non-contaminated images
None of the exposures in our library are defect-free. The first step to create the non-contaminated dataset to be used as "reference" images consists in identifying the cleanest possible subset of exposures. CFHT-Megacam (u, r, i, z bands), CTIO-DECam (g, r, i, z, Y bands) and Subaru-HSC (g, r, i, z, y bands) exposures are found to have the best cosmetics and are selected to create the non-contaminated dataset. The defects inevitably present in these images are handled as follows.
First, dead pixels and columns are identified from flat-field images and inpainted using Gaussian interpolation (e.g., Williams et al. 1998). Then, the vast majority of cosmic rays are detected using the Astro-SCRAPPY Python implementation (McCully et al. 2018) of LA Cosmic (van Dokkum 2001) and also inpainted using Gaussian interpolation. Finally, given the high performance of the DECam and HSC pipelines, the corresponding images are perfect candidates for our non-contaminated datasets. These two pipelines not only efficiently detect but also interpolate problematic pixels (in particular saturated pixels, hot and bad pixels, and cosmic ray hits). Such interpolations being a feature of several modern pipelines (e.g., various NOAO pipelines, but also the LSST pipeline), we choose to treat these pixels as regular pixels so that the networks are able to work with images originating from such pipelines.
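The following minimal sketch illustrates the idea of kernel-based inpainting of masked pixels; the normalized-convolution approach, the `sigma` value, and the input names (`image`, `bad`) are our assumptions, not the exact procedure of Williams et al. (1998).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inpaint_gaussian(image, bad, sigma=1.5):
    """Replace pixels flagged in `bad` by a Gaussian-weighted mean of good pixels."""
    good = (~bad).astype(float)
    data = np.where(bad, 0.0, image)
    # Smooth both the data and the validity map, then normalize:
    num = gaussian_filter(data, sigma)
    den = gaussian_filter(good, sigma)
    filled = np.where(den > 0, num / np.maximum(den, 1e-12), np.median(image))
    out = image.copy()
    out[bad] = filled[bad]
    return out
```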
Patches of size 400 × 400 pixels are randomly extracted from the cleaned images. 75% of them are used to generate training data and the remaining 25% for test data.
The final non-contaminated dataset includes 50 000 individual images, ensuring that we have a sufficiently diverse and large amount of training data for our experiment. A non-representative training set can severely impact the performance of a CNN and result in significant biases in the classification task. To prevent this, we measure a number of basic properties describing prototypical aspects of ground-based astronomical images to verify that their distributions in the uncontaminated dataset are wide enough and reasonably well sampled.
The measured properties include, for example, the average full width at half maximum (FWHM) of point sources, estimated in each image using PSFEx (Bertin 2013). This allows us to ensure that the training set covers a broad range of ambient (seeing) conditions and point spread function (PSF) samplings. Also, the source density (the number of sources in the image divided by the physical size of the image) is measured to make sure that our training set encompasses a broad range of source crowding, from sparse cosmological fields to dense, low-galactic-latitude stellar fields.
Additionally, the background is modeled in all the images following the method used by SExtractor (Bertin & Arnouts 1996), i.e., using a combination of κσ-clipping and mode estimation. The background model provides important parameters such as the standard deviation of the background, which is required in most of the data-processing operations that follow.
Cosmic rays (CR)
"Cosmic ray" hits are produced by particles hitting the detector or by the photons resulting from the decay of radioactive atoms near the detector.They appear as bright and sharp patterns with shapes ranging from dots affecting one or two pixels to long wandering tracks commonly referred to as "worm", depending on incidence angle and detector thickness.
We create a library of real CRs using dark frames with long exposure times from the CFH12K, HSC, MegaCam, MOSAIC, and OmegaCam cameras. These cameras comprise both "thick", red-sensitive, deep-depletion charge-coupled devices (CCDs), more prone to long worms, and thinner, blue-sensitive devices, more prone to unresolved hits. Dark frames are exposures taken with the shutter closed, so that the only contributors to the content of undamaged pixels are the offset, dark current, and CR hits (plus Poisson and readout noise). A mask M of the pixels affected by CR hits in a given dark frame D can therefore easily be generated by applying a simple detection threshold. We conservatively set this threshold to 3σ_D above the median value m_D of D:

M_p = 1 if D_p > m_D + 3σ_D, and M_p = 0 otherwise. (1)

Among all the dark images used, a bit more than 900 million cosmic ray pixels are detected after thresholding. Considering that the average footprint area of a cosmic ray hit is 15 pixels, this represents a richly diversified population of about 60 million cosmic ray "objects".
Next we dilate M with a 3 × 3 pixel kernel to create the final M^(D) mask. This mask is used both as ground truth for the classifier and to generate the final "contaminated" image C by adding CR pixels with rescaled values to the uncontaminated image U:

C = U + k_C (σ_U/σ_D) M^(D) ∘ D, (2)

where σ_U is the estimated standard deviation of the uncontaminated image background, ∘ denotes the element-wise product, and k_C is a scaling factor empirically set to 1/8. D has been background-subtracted before this operation, using a SExtractor-like background estimation.
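A minimal sketch of Eqs. (1) and (2) as reconstructed above; the MAD-based σ_D estimator and the median background subtraction are illustrative assumptions standing in for the SExtractor-like estimates.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def cosmic_ray_mask(dark, nsigma=3.0):
    """Eq. (1): flag pixels of a dark frame D above m_D + 3*sigma_D."""
    m_d = np.median(dark)
    sigma_d = 1.4826 * np.median(np.abs(dark - m_d))  # robust sigma (assumption)
    return (dark > m_d + nsigma * sigma_d), sigma_d

def add_cosmic_rays(clean, dark, sigma_u, k_c=1.0 / 8.0):
    """Eq. (2): add rescaled, mask-selected dark-frame CR pixels to a clean image."""
    mask, sigma_d = cosmic_ray_mask(dark)
    mask = binary_dilation(mask, structure=np.ones((3, 3), bool))
    dark_bs = dark - np.median(dark)  # stand-in for a SExtractor-like background
    return clean + k_c * (sigma_u / sigma_d) * mask * dark_bs, mask
```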
A typical CR hit added to an image and its ground truth mask are shown in Fig. 1.
Hot columns and lines, dead columns, lines, and clustered pixels, hot pixels, and dead pixels (HCL, DCL, HP, DP)
These contaminants mainly come from electronic defects and the way the detectors are read. They correspond to pixels having a response very different from that of their neighbors, either much lower (bad pixels, traps) or much noisier (hot pixels). These blemishes can be found as single pixels, in small clusters, or affecting a large fraction of a column or row. We treat single pixels and clumps, columns, and lines separately, although they may often share a common origin. All the hot or dead pixels added to the uncontaminated images are simulated. The number of these pixels is set as follows.
For columns and lines, a random number of columns and lines is chosen with a uniform distribution over [1, 4]. Each column or line has a uniform length picked between 30 pixels and the whole image height or width, and a uniform thickness in [1, 3]. For isolated pixels, a random fraction of pixels is chosen with a uniform distribution between 0.0002 and 0.0005; these pixels are uniformly distributed over the image. Clustered pixels are given a rectangular or a random convex polygonal shape. The random convex shapes are constrained to have 5 or 6 edges and to fit in 20 × 20 pixel bounding boxes.
The values of these pixels are computed as follows. For hot values, a uniformly distributed random base value v is chosen in the interval [15σ_U, 100σ_U]. Hot values are then generated according to the normal law N(v, (0.02v)²), so that they are randomly distributed over [0.9v, 1.1v]. For dead values, one of the following three equiprobable recipes is chosen at random. Either all values are exactly 0; or values are generated according to the normal law N(0, (0.02σ_U)²), so that they are close to 0 but not exactly 0; or a random base value v is chosen with a uniform distribution in the interval [0.1m_U, 0.7m_U], where m_U is the median of the uncontaminated image sky background, and dead pixel values are generated using the normal law N(v, (0.02v)²), so that values fall in the interval [0.9v, 1.1v].
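The value-generation recipes just described translate directly into code; this sketch assumes a NumPy random generator and a hypothetical number of pixels n.

```python
import numpy as np
rng = np.random.default_rng()

def hot_values(n, sigma_u):
    v = rng.uniform(15 * sigma_u, 100 * sigma_u)        # random base value
    return rng.normal(v, 0.02 * v, size=n)              # spread over ~[0.9v, 1.1v]

def dead_values(n, sigma_u, m_u):
    recipe = rng.integers(3)                            # three equiprobable recipes
    if recipe == 0:
        return np.zeros(n)                              # exactly 0
    if recipe == 1:
        return rng.normal(0.0, 0.02 * sigma_u, size=n)  # close to 0 but not exactly
    v = rng.uniform(0.1 * m_u, 0.7 * m_u)               # fraction of sky median
    return rng.normal(v, 0.02 * v, size=n)
```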
Examples of such column and line defects are shown in Fig. 1.
Persistence (P)
Persistence occurs when overly bright pixels in a previous exposure leave a remnant image in the following exposures.
To simulate this effect in an uncontaminated image, we apply the so-called "Fermi model" described in Long et al. (2015). Persistence, in units of e⁻ s⁻¹, is modeled as a function of the initial pixel level x_p and time t:

P(x_p, t) = A_p [exp((x_0 − x_p)/δx) + 1]⁻¹ (x_p/x_0)^α (t/1000)^(−γ). (3)

The goal of Long et al. (2015) was to fit the model parameters x_0, δx, α, γ using observations, to later predict persistence for their detector. In our simulations, parameter values are randomized to represent various types and amounts of persistence (see Table 2). To compute the pixel value of the persistence effect, we derive the number of electrons emitted by the persistence effect during the exposure. In the following, we denote by T the duration of the exposure in which the persistence effect occurs, and by ∆t the delay between that exposure and the previous one. We obtain the number of ADUs collected at pixel p during the interval [∆t, ∆t + T] by integrating Eq. (3) and dividing by the gain G:

P_p = (1/G) ∫ from ∆t to ∆t+T of P(x_p, t) dt. (4)

These pixel values are then added to the uncontaminated image:

C = U + k_P σ_U (P − P_min)/(P_max − P_min), (5)

where P are the persistence values computed in Eq. (4), P_min and P_max are the minimum and maximum of these values, and k_P is a scaling factor empirically set to 5. Images of saturated stars are simulated using SkyMaker (Bertin 2009) and binarized to generate masks of saturated pixels. The masks define the footprints of persistence artifacts, within which the x_p's are computed (Table 2). An example is shown in Fig. 1. The randomized parameters of Table 2 are: A_p = 1; x_p (e⁻) ~ Poisson(x_m), with x_m ~ N(15·10⁵, (0.02 × 15·10⁵)²); x_0 (e⁻) ~ N(9·10⁴, (0.02 × 9·10⁴)²); δx (e⁻) ~ N(18·10³, (0.02 × 18·10³)²); α = 0.178; γ = 1.078; G ~ N(10, 1).
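A numerical sketch of Eq. (4), integrating the Fermi model of Eq. (3) over the exposure; the analytic form of the model is our reconstruction based on Long et al. (2015), and the sampling density of the integration grid is arbitrary.

```python
import numpy as np

def persistence_adu(x_p, dt, T, x0, dx, alpha, gamma, gain, A_p=1.0):
    """Integrate the Fermi persistence model over [dt, dt+T] and convert to ADU.

    The model form is an assumption reconstructed from Long et al. (2015)."""
    t = np.linspace(dt, dt + T, 512)                            # time grid in seconds
    rate = (A_p / (np.exp((x0 - x_p) / dx) + 1.0)
            * (x_p / x0) ** alpha * (t / 1000.0) ** (-gamma))   # e-/s
    return np.trapz(rate, t) / gain                             # collected ADUs
```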
Trails (TRL)
Satellites, meteors, and even planes crossing the field of view generate long, quasi-rectilinear trails across the frame.
We simulate these motion-blurred artifacts by generating closely spaced star images with identical magnitudes along a linear path, once again using SkyMaker. We also generate a second population of trails with magnitude changes to account for satellite "flares".
A random, Gaussian-distributed component with a ≈1 pixel standard deviation is added to every stellar coordinate to simulate jittering from atmospheric turbulence, so that the stars are not aligned along a perfect straight line. For meteors, defocusing must be taken into account (Bektešević et al. 2018). The amount of defocusing θ, expressed as the apparent width of the pupil pattern in arcseconds, is

θ ≈ 206 265 D/d,

where D is the diameter of the primary mirror and d the meteor distance, both in meters. D and d are randomly drawn from flat distributions in the intervals [2, 8] and [80 000, 120 000], respectively.
The ground truth mask is obtained by binarizing the satellite image at a small and arbitrary threshold above the simulated background. This mask is then dilated using a 7 × 7 pixel structuring element.
To avoid any visible truncation, we add the whole simulated satellite image S, multiplied by a dilated version M^(S) of the ground truth mask, to the uncontaminated image:

C = U + k_T (σ_U/σ_S) M^(S) ∘ S,

where σ_S is the standard deviation of the satellite image background, σ_U the standard deviation of the uncontaminated image background, and k_T is a scaling factor empirically set to 6. An example of a satellite trail is shown in Fig. 1.
Fringes (FR)
Fringes are thin-film interference patterns occurring in the detectors. The irregular shape of fringes is caused by thickness variations within the thin layers. To add fringing to images, we use real fringe maps produced at the pre-processing level by Alambic for all the optical CCD cameras of Table 1. These reconstructed fringe maps are often affected by white noise, which we mitigate by smoothing with a top-hat kernel of diameter 7 pixels. The fringe pattern F can affect large areas of an image, but not necessarily the whole image. To reproduce this effect, a random 3rd-degree 2D polynomial envelope E covering the whole image is generated. The final fringe envelope E^(F) is computed by normalizing E over the interval [−5, 5] and flattening the result using the sigmoid function:

E^(F) = (1 + exp(−E'))⁻¹, with E' = 10 (E − E_min)/(E_max − E_min) − 5,

where E_min and E_max are the minimum and maximum values of E, respectively. The fringe pattern, modulated by its envelope, is then added to the uncontaminated image:

C = U + k_F (σ_U/σ_F) E^(F) ∘ F,

where σ_F is the standard deviation of the fringe pattern and k_F is an empirical scaling factor set to 0.6. The ground truth mask is computed by thresholding the normalized 2D polynomial envelope at −0.20. An example of a simulated contamination by a fringe pattern can be found in Fig. 2.
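A compact sketch of the envelope normalization, sigmoid flattening, and fringe addition as reconstructed above; `E` (the raw polynomial envelope) and `fringe` are assumed inputs, and the σ_F normalization follows our reconstructed equation.

```python
import numpy as np

def add_fringes(clean, fringe, E, sigma_u, k_f=0.6):
    """Add an envelope-modulated fringe pattern to a clean image."""
    e = 10.0 * (E - E.min()) / (E.max() - E.min()) - 5.0  # normalize E to [-5, 5]
    env = 1.0 / (1.0 + np.exp(-e))                        # sigmoid flattening
    contaminated = clean + k_f * sigma_u * env * fringe / fringe.std()
    mask = e > -0.20                                      # ground truth footprint
    return contaminated, mask
```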
Nebulosity (NEB)
Extended emission originating from dust clouds illuminated by star light, or from photo-dissociation regions, can be present in astronomical images. These "nebulosities" are not artifacts, but they make the detection and measurement of overlapping stars or galaxies more difficult; they may also trigger the fringe detector. Hence, it is useful to have them identified and properly flagged. Because the spatial distribution of thermal dust emission closely matches that of reflection nebulae at shorter wavelengths (e.g., Ienaka et al. 2013), we use far-infrared images of molecular clouds around star-forming regions as a source of nebulous contaminants. We choose pipeline-processed 250 µm images obtained with the SPIRE instrument (Griffin et al. 2010) on board the Herschel Space Observatory (Pilbratt et al. 2010), which we retrieve from the Herschel Science Archive. The 250 µm channel offers the best compromise between signal-to-noise ratio and spatial resolution. Moreover, at wavelengths of 250 µm and above, low-galactic-latitude fields contain mostly extended emission from the cold gas and almost no point sources (apart from a few proto-stars and proto-stellar cores). Therefore, they are perfectly suited to being added to our optical and near-infrared wide-field exposures. We do not resize or reconvolve the SPIRE images, taking advantage of the scale invariance of dust emission observed down to the arcsecond level in molecular clouds (Miville-Deschênes et al. 2016).
We add the nebulous contaminant data to our uncontaminated images in the same way as for fringes, except that there is no 2D polynomial envelope. The whole nebulosity image is background-subtracted (using a SExtractor-like background estimation) to form the final nebulosity pattern N, which is then added to the uncontaminated image:

C = U + k_N (σ_U/σ_N) N,

where σ_N is the standard deviation of the nebulosity pattern and k_N is an empirical scaling factor set to 1.3. The ground truth mask is computed by thresholding N at one sigma above 0. This mask is then eroded with a disk-shaped structuring element of diameter 6 to remove spurious individual pixels, and dilated with a disk-shaped structuring element of diameter 22. An example of added nebulosity is shown in Fig. 2. The light from line-emission nebulae may not necessarily exhibit the same statistical properties as the reflection nebulae targeted for training. However, line-emission nebulae are generally brighter, and in practice the classifier has no problem detecting them.
Saturation and bleeding (SAT)
Each detector pixel can accumulate only a limited number of electrons. Once the full-well limit is reached, the pixel becomes saturated. In CCDs, charges may even overflow, leaving saturation trails (a.k.a. bleeding trails) along the transfer direction. Such pixels are easily identified in clean images, knowing the saturation level of each instrument.
Diffraction spikes
Diffraction spikes are patterns appearing around bright stars, caused by light diffracting around the spider supporting the secondary mirror. Given the typical cross shape of spiders, the pattern is usually relatively easy to identify. In some cases, the pattern can deviate significantly from a simple cross because it is affected by various effects: distortions, telescope attitude, the truss structure of the spider arms, rough edges or cables around the secondary mirror support, reflections on other telescope structures, etc. A specific strategy was put in place to build a spike library to be used to train the CNN.
On the one hand, MegaCam and DECam are mounted on equatorial telescopes, and the orientation of spikes is usually (under standard north-east orientation) a "+" for MegaCam and an "x" for DECam. On the other hand, HSC is mounted on the alt-az Subaru telescope, and spikes do not display any preferred orientation, making their automated identification more complicated. For this reason, we define a two-step strategy in which, first, samples of "+"- and "x"-shaped spikes are extracted from DECam and MegaCam images and randomly rotated to generate a library of diffraction spikes with various orientations. The library is then used to train a new CNN for identifying spikes in HSC images.
MegaCam and DECam analysis. We first identify the brightest stars using SExtractor and extract 300 × 300 pixel image cutouts around them. The cutouts are thresholded at three sigma above the background and binarized. Element-wise products are computed between these binary images and large "+"-shaped (MegaCam) or "x"-shaped (DECam) synthetic masks to isolate the central stars. Each element-wise product is then matched-filtered with a thinner version of the same pattern and binarized using an arbitrary threshold set to 15 ADUs. The empirical size of the spike components is estimated in these masks by measuring the maximum extent of the resulting footprint along either of the two relevant spike directions (horizontal and vertical, or diagonals). Finally, the maximum size over the two directions is kept and empirically rescaled to obtain the final spike length and width. If the resulting size is too small, we consider that there is no spike, in order to avoid false positives (e.g., a star bright enough to be detected by SExtractor but without obvious spikes). Figure 5 gives an overview of the whole process.
HSC analysis. We train a new neural network to identify spikes in all directions. For that purpose, we build a new training set using the spikes identified in MegaCam and DECam images as described above and apply random rotations between 0° and 360° to ensure rotational invariance. The neural network has a simple SegNet-like convolutional-deconvolutional architecture (Badrinarayanan et al. 2015), but it is not based on VGG hyper-parameters (Simonyan & Zisserman 2014). It uses 21 × 21, 11 × 11, 7 × 7, and 5 × 5 convolutional kernels in 8, 16, 32, and 32 feature maps, respectively. The model architecture is shown in Fig. 3. Activation functions are all ELU, except on the last layer where softmax is used. The network is trained to minimize the softmax cross-entropy loss with the Adam optimizer (Kingma & Ba 2014). Each pixel cost is weighted to balance the disproportion between spike and background pixels: if p_s is the spike pixel proportion in the training set, then spike pixels are weighted with 1 − p_s, while background pixels are weighted with p_s (this is the two-class equivalent of the basic weighting scheme described in Sect. 3.1). Once trained, we run inferences on all the brightest stars detected with SExtractor in the HSC images. Output probabilities are binarized based on the MCC (see Eq. (22)), and the resulting mask is empirically eroded and dilated to obtain a clean mask. An example is given in Fig. 4.
Overscan (OV)
Overscan regions are common in CCD exposures, showing up as strips of pixels with very low values at the borders of the frame. To avoid triggering false predictions on real data, overscans must be included in our training set. Although these are not truly contaminants, we thus find it useful to include an "overscan" class in the list of identified features. Overscan regions are simulated by including random strips on the sides of images. Pixel values in the strips are generated in the same way as bad pixel values.
Bright background (BBG) and background (BG)
The objects of interest in this study are the contaminants. Hence, following standard computer vision terminology, all the other types of pixels, including both astronomical objects and empty sky areas, belong to the "background".
We find that defining a distinct class for each of these types of background pixels helps with the training procedure. We thus define "bright background" (BBG) pixels as pixels belonging to astronomical objects (except nebulosity) present in the uncontaminated images, and background (BG) pixels as pixels covering an empty sky area.
Ground truth masks for bright background pixels are obtained by binarizing the image at 10σ_U before adding the contaminants. The remaining pixels are sky background pixels, which are not affected by any labeled feature.
Global contaminants
We now describe the data used to identify global contaminants. Tracking errors happen when the telescope moves during an exposure due to, for instance, telescope guiding or tracking failures, wind gusts, or earthquakes. As illustrated in Fig. 6, this causes all the sources to be blurred along a path on the celestial sphere generated by the motion of the telescope. Because tracking errors affect the entire focal plane, the analysis is performed globally on the whole image. The library of real images affected by tracking errors is a compilation of exposures identified in the COSMIC-DANCE survey for the cameras of Table 1, and of images that were gathered over the years at the UKIRT telescope, kindly provided to us by Mike Read.
Generating training samples
Both types of contaminants (global and local) must be handled separately: they require different neural network architectures, as well as different training data sets. Figure 7 gives a synthetic view of the sample production pipeline and the various data sources.
The breakdown per imaging instrument of the COSMIC DANCe dataset is listed in Table 3.
The following subsections describe some special features of the sample generation.
Local contaminants
The order in which local contaminants are added is important. Bad columns, lines, and pixels are added last because they are static defects that define the final value of a pixel, no matter how many photons hit it.
In our neural network architecture, contaminant classes do not need to be mutually exclusive. Each pixel can be assigned several classes, as several defects can affect a given pixel (e.g., fringes and a cosmic ray hit). On the other hand, the faint background class, which defines pixels not affected by any defect, excludes all other classes. A list of all the contaminants included in this study is presented in Table 4.
Figure 8 shows examples of local contaminant sample input images, each with its color-coded ground truth.
Global contaminants
The global contaminant dataset contains images that have been hand-labeled as affected by tracking errors or not. The images, taken from the COSMIC DANCe archives, are not cleaned; hence they are potentially affected by preexisting local contaminants. This is because the global contaminant detector is intended to be operated before the local one.
Dynamic compression
All images are dynamically compressed before being fed to the neural networks using the following procedure.
The aim of dynamic compression is to reduce the dynamic range of pixel values, which is found to help neural network convergence. The image is first background-subtracted. Then, a small random offset is added to increase robustness against background subtraction residuals. The resulting image is normalized by the standard deviation of the background noise and finally compressed through the arsinh function, which has the property of behaving linearly around zero and logarithmically for large (positive or negative) values.
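A minimal sketch of this compression procedure; the amplitude of the random offset is an assumption, as the text does not specify it.

```python
import numpy as np

def dynamic_compression(image, bg_map, bg_sigma, rng=None):
    """Background-subtract, add a small random offset, normalize, arsinh-compress."""
    rng = rng or np.random.default_rng()
    x = image - bg_map                       # background subtraction
    x = x + rng.normal(0.0, 0.1 * bg_sigma)  # small random offset (amplitude assumed)
    x = x / bg_sigma                         # normalize by the background noise
    return np.arcsinh(x)                     # linear near 0, logarithmic for large |x|
```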
Data augmentation
We deploy data augmentation techniques to exploit the full information potential of our data. The two following data augmentation procedures are applied to the set of local contaminant training samples. First, random rotations, with angles that are multiples of 90°, are applied to cosmic rays, fringe patterns, and nebulosity patterns. Secondly, some images are rebinned. When selecting a clean image, we check whether the image can be 2 × 2 rebinned under the constraint that the FWHM remains greater than 2 pixels; the FWHM of the image was previously estimated using SExtractor (Bertin & Arnouts 1996). This value is chosen on the basis of the plate sampling offered by current ground-based imagers. If the image can be 2 × 2 rebinned while meeting the condition above, it has a 50% probability of being rebinned.
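The rebinning check can be sketched as follows; the flux-conserving 2 × 2 sum is an assumption, and the FWHM is taken as a precomputed input.

```python
import numpy as np

def maybe_rebin(image, fwhm, rng=None):
    """2x2-rebin with 50% probability if the binned FWHM stays above 2 pixels."""
    rng = rng or np.random.default_rng()
    if fwhm / 2.0 > 2.0 and rng.random() < 0.5:
        h, w = image.shape[0] // 2 * 2, image.shape[1] // 2 * 2  # crop to even size
        x = image[:h, :w].reshape(h // 2, 2, w // 2, 2)
        return x.sum(axis=(1, 3)), fwhm / 2.0  # flux-conserving sum (assumption)
    return image, fwhm
```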
Convolutional neural networks
In this section, we describe the convolutional neural networks used for our analysis. The first one, MaxiMask, classifies pixels ("local contaminants"), while the second one, MaxiTrack, classifies images ("global contaminants").
Local contaminant neural network architecture
The MaxiMask architecture comprises three parts. The first part contains single and double convolutional layers followed by max-pooling downsampling. This enables the network to compute relevant feature maps at different scales. During this step, the max-pooling pixel indices are kept for later reuse.
The second part also incorporates convolutional layers and recovers spatial resolution by upsampling feature maps using the max-pooling indices. An example of unpooling is given in Fig. 9. At each resolution level, the feature maps of the first part are summed with the corresponding upsampled feature maps to exploit the maximum of information.
The third part is made of extra unpool-convolution paths (UCPs) that recover the highest image resolution from each feature map resolution, so that the network can exploit the information content of each resolution. This results in 5 pre-predictions, one for each resolution.
The 5 pre-predictions are finally concatenated, and a last convolution layer builds the final predictions. The sigmoid activation functions in this last layer are not softmax-normalized, to allow non-mutually exclusive classes to be assigned jointly to pixels. All convolutional layers use 3 × 3 kernels and apply ReLU activations. The architecture is represented in Fig. 10, and the hyper-parameters are described in more detail in Table 5. The neural network is implemented using the TensorFlow library (Abadi et al. 2016) on a TITAN X Nvidia GPU.
Training and loss function
Training is done for 30 epochs on 50 000 images, with mini-batches shuffled at every epoch. The batch size is kept small (10) to maintain a reasonable memory footprint. The model is trained end-to-end using the Adam optimizer (Kingma & Ba 2014). The loss function L is the sigmoid cross-entropy (Rubinstein 1999) summed over all classes and pixels, and averaged across batch images:

L = −(1/|B|) Σ_{b∈B} Σ_{p∈P} w_{p,b} Σ_{c∈C} [ y_{b,p,c} log ŷ_{b,p,c} + (1 − y_{b,p,c}) log(1 − ŷ_{b,p,c}) ], (12)

where B is the set of batch images, P is the set of all image pixels, C is the set of all contaminant classes, w_{p,b} is a weight applied to pixel p of image b in the batch (see below), ŷ_{b,p,c} is the sigmoid prediction for class ω_c of pixel p of image b in the batch, and y_{b,p,c} is the ground truth label for class ω_c of pixel p of image b, equal to 1 if ω_c ∈ C_{p,b} and 0 otherwise, C_{p,b} ⊂ C being the set of contaminant classes labeling pixel p of image b in the batch. In order to improve the back-propagation of error gradients down to the deepest layers, several losses are combined. In addition to the main sigmoid cross-entropy loss L computed on the final predictions, we can compute a sigmoid cross-entropy loss for each of the 5 pre-predictions. There are several ways to combine all of these losses. Like Yang et al. (2018), we find that adding 33% of each of the 3 smallest-resolution losses (or 50% of each of the 2 smallest) to the main loss works best. The two main rules here are that the additional loss weights should sum to 1, and that higher-resolution pre-predictions become less informative as they get closer to the one at full resolution.
Basic training procedures are vulnerable to strong class imbalance, which makes it more likely for the neural network to converge to a state where rare contaminants are not properly detected. Contaminant classes are so statistically insignificant (down to one part in 10⁶ with real data, typically) that the classifier may be tricked into assigning all pixels to the background class. To prevent this, we start by applying a basic weighting scheme to each pixel according to its class representation in the training set: each pixel p of batch image b belonging to the classes in C_{p,b} is weighted by

w_{p,b} = (1/|C_{p,b}|) Σ_{ω_c ∈ C_{p,b}} (1 − P(ω_c|T)),

where P(ω_c|T) is the fraction of pixels labeled with class ω_c in the training dataset T. The P(ω_c|T)'s do not sum to one, as many pixels belong to several classes and are thus counted several times. We find that this weighting scheme brings slightly better results and less variability in the training if the weights are computed once from the class proportions of the whole training set, instead of being recomputed for each image. However, with this simple weighting scheme, background-class pixels that are close to rare features are given very low weights, although they are decisive for classification. To circumvent this, the weight maps are smoothed with a 3 × 3 Gaussian kernel with unit standard deviation, so that highly weighted regions spread over larger areas. Other kernel sizes and standard deviations were tested, but we find 3 and 1 to give the best results. The weights resulting from this smoothing are the w_{p,b} used in the loss function of Eq. (12).
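A sketch of the weighting scheme as reconstructed above, including the Gaussian smoothing; the per-pixel averaging over classes is our assumption, chosen to be consistent with the two-class scheme used for the spike network.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pixel_weights(labels, class_priors):
    """labels: (H, W, C) binary class masks; class_priors: P(w_c|T) for each class."""
    w_c = 1.0 - np.asarray(class_priors)                # rare classes get high weights
    n_classes = labels.sum(axis=-1).clip(min=1)
    w = (labels * w_c).sum(axis=-1) / n_classes         # average over a pixel's classes
    return gaussian_filter(w, sigma=1.0, truncate=1.0)  # ~3x3 Gaussian smoothing
```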
Finally, the solution is regularized via the l2 norm of the N convolution kernel weight vectors k_i of the network, by adding the following term to the total loss:

λ Σ_{i=1}^{N} ||k_i||²₂,

where λ sets the regularization strength. We find λ = 1 to provide the best results.
Global contaminant neural network architecture
The convolutional neural network that detects global contaminants (tracking errors), MaxiTrack, is a simple network made of convolutional layers followed by max-pooling and fully connected layers. The architecture of the network is schematized in Fig. 11 and detailed in Table 6. Because the two classes are mutually exclusive (affected by tracking errors or not), we adopt for the output layer a softmax activation function and a softmax cross-entropy loss function (Rubinstein 1999). Training is done for 48 epochs on 50 000 images with a mini-batch size of 64 samples, using the Adam optimizer.
Local contaminants neural network
We evaluate the quality of the results in several ways. First, we estimate the performance of the network on test data, both quantitatively through various metrics and qualitatively. We verify that there is no over-fitting by checking that performance on the test set is comparable to that on the training set. Next, we show that performance is immune to the presence or absence of other contaminants in a given image. We finally compare the performance of the cosmic ray detector to that of a classical algorithm.
Performance metrics
We first estimate classification performance on a benchmark test set comprising 5000 images. Because the network acts as a binary classifier for every class, we can compute a Receiver Operating Characteristic (ROC) curve for each of them. ROC curves represent the True Positive Rate (TPR) vs. the False Positive Rate (FPR):

TPR = TP/P = TP/(TP + FN),
FPR = FP/N = FP/(FP + TN),

where P is the number of contaminated pixels, TP is the number of true positives (contaminated pixels successfully recovered as contaminated), FN is the number of false negatives (contaminated pixels wrongly classified as non-contaminated), N is the number of non-contaminated pixels, FP is the number of false positives (non-contaminated pixels wrongly classified as contaminated), and TN is the number of true negatives (non-contaminated pixels successfully recovered as non-contaminated).
The accuracy (ACC) is subsequently defined as

ACC = (TP + TN)/(P + N).

The more the ROC curve bends toward the upper left part of the graph, the better the classifier. However, with strongly imbalanced datasets such as our pixel data, one must be very cautious with the TPR, FPR, and ACC values when assessing the quality of the results. For example, if one assumes that there are 1000 pixels of the contaminant class (P) and 159 000 pixels of the background class (N) in a 400 × 400 pixel sub-image, a TPR of 99% and an FPR of 1%, corresponding to an accuracy of 99%, would actually represent a poor performance, as it would imply 990 true positives, 10 false negatives, 157 410 true negatives, and 1590 false positives. In the end, there would be more false positives FP (pixels wrongly classified as contaminated) than true positives TP.
For this reason, the ROC curves in Fig. A.1 are displayed with a logarithmic scale on the FPR axis. We require the FPR to be very low (e.g., smaller than 10⁻³) to consider that the network performs properly.
On the other hand, recovering the exact footprint of large, fuzzy defects is almost impossible at the level of individual pixels, which makes the classification performance for the persistence, satellite trail, fringe, nebulosity, spike, and background classes look worse in Fig. A.1 than it really is in practice. Also, two ROC curves are drawn for cosmic rays and trails. The second one (in green) is computed using only the instances of the class that are above a specific level of the sky background; these instances were defined by retaining those that had more than half of their pixels above 3σ. These second curves show that the network performs better on more obvious cases.
In addition to the FPR, TPR, ACC, and the area under the ROC curve (AUC), we use two other metrics helpful for assessing the network performance: the purity (or precision), representing the fraction of correct predictions among the positively classified samples, and the Matthews correlation coefficient (MCC, Matthews 1975), which is an accuracy measure that takes into account the strong imbalance between classes:
PUR = TP/(TP + FP), (21)
MCC = (TP · TN − FP · FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)). (22)

In the above example, the purity would reach only 38% and the MCC only 61%, highlighting the classifier's poor positive-class discrimination.
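The worked example above can be checked with a few lines of arithmetic:

```python
# P = 1000 contaminated and N = 159 000 clean pixels, TPR = 0.99, FPR = 0.01.
tp, fn = 990, 10
fp, tn = 1590, 157410
acc = (tp + tn) / (tp + tn + fp + fn)                        # ~0.99
pur = tp / (tp + fp)                                         # ~0.38
mcc = (tp * tn - fp * fn) / (
    ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5)  # ~0.61
print(f"ACC={acc:.2f}  PUR={pur:.2f}  MCC={mcc:.2f}")
```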
Figure A.3 shows the true positive rate against the purity. Again, the purple curve represents how a random classifier would perform. In these curves, the best classifier would sit in the top right (TPR = 1 and PUR = 1). The darkest points represent the lowest thresholds, while the lightest represent the highest ones.
Some qualitative results are presented in Fig. 12. A given pixel is assigned a given class if its probability of belonging to this class is higher than the best threshold in the sense of the MCC.
Finally, MCCs are represented in Fig. A.2 as a function of the output threshold. In each curve, the threshold giving the best MCC is annotated next to the best MCC point. It is important to note that the best threshold depends on the modification of the prior that has been applied to the raw output probabilities. This update of the prior is explained in Sect. 5.
Robustness regarding the context
The MaxiMask neural network is trained using mostly images that include all contaminant classes. Hence, we must check whether the network performs equally well independently of the context, that is, whether it delivers equally good results for images containing, for example, a single class of contaminant.
To this aim, for every contaminant class, we generate a dataset of 1000 images affected only by this type of contaminant (plus saturated and background pixels), and another dataset of 1000 images containing only saturated and background pixels. We then compare the performance of MaxiMask for each class with that obtained on the corresponding dataset. We find that performance (AUC) is similar or even slightly higher for the majority of the classes. This shows that the network is not conditioned to work only in the exact context of the training. The results are presented in Table 7.
As can be seen, for all classes but fringes and nebulosity, performance improves when a single type of contaminant is present. The slight improvement may come from the fact that ambiguous cases (pixels affected by more than one contaminant class, e.g., a cosmic ray or a hot pixel over a satellite trail) are not present in the single-contaminant test set. Finally, we compare the cosmic ray detection performance with that of a classical algorithm. To do so, we generate two datasets containing only the cosmic ray contaminant class (plus objects and background): a well-sampled set of images with FWHMs larger than 2.5 pixels, and an undersampled set with FWHMs smaller than 2.5 pixels. We run MaxiMask and the Astro-SCRAPPY Python implementation of LA Cosmic. To make a fair comparison, the LA Cosmic masks are dilated in the same way as the ground truth cosmic ray masks of MaxiMask. However, while MaxiMask generates probability maps that can be thresholded at different levels, LA Cosmic only outputs a binary mask. To compare the results, we therefore build ROC curves for the neural network and over-plot a single point representing the result obtained with LA Cosmic.
Figure 13 shows that the neural network performs better than LA Cosmic in both regimes with our data.
Global contaminants neural network
The ROC curve for the global contaminant neural network is shown in Fig. 14. It is computed from a test set of 5000 images.
Modifying priors
If one knows what class proportions are expected in the observation data, output probabilities can be updated to better match these priors (e.g., Saerens et al. 2002;Bailer-Jones et al. 2008).
The outputs of a perfectly trained neural network classifier with a cross-entropy loss function can be interpreted as Bayesian posterior probabilities (e.g., Richard & Lippmann 1991; Hampshire & Pearlmutter 1991; Rojas 1996). Under this assumption and using Bayes' rule, the output for the class ω_c of the trained neural network model defined by a training set T writes:

P(ω_c|x, T) = p(x|ω_c, T) P(ω_c|T) / Σ_{ω∈{ω_c, ω̄_c}} p(x|ω, T) P(ω|T), (23)

where x is the input image data around the pixel of interest, p(x|ω_c, T) is the distribution of x conditional to class ω_c in the training set T, and P(ω_c|T) is the prior probability of a pixel belonging to the class ω_c in the trained model.
As each output acts as a binary classifier, the sum is done over the class ω_c (contaminant) and its complement ω̄_c ("not the contaminant").
With the observation data set O we may similarly write:

P(ω_c|x, O) = p(x|ω_c, O) P(ω_c|O) / Σ_{ω∈{ω_c, ω̄_c}} p(x|ω, O) P(ω|O), (24)

where P(ω_c|O) is the expected fraction of pixels with class ω_c in O. Now, if the appearance of defects in O matches that in the training set T, we have p(x|ω_c, T) = p(x|ω_c, O), and we can rewrite (24) as:

P(ω_c|x, O) = [P(ω_c|x, T) P(ω_c|O)/P(ω_c|T)] / [Σ_{ω∈{ω_c, ω̄_c}} P(ω|x, T) P(ω|O)/P(ω|T)]. (25)

If pixels were all weighted equally, the training priors P(ω_c|T) would simply be the class proportions in the training set. However, this is not the case here, and the pixel weights have to be taken into account. To do so, we follow Bailer-Jones et al. (2008)'s approach, using as an estimator of P(ω_c|T) the posterior mean over the test set (which by construction is distributed identically to the training set). These corrected probabilities are used to compute the MCC curves in Fig. A.2 (whereas the prior correction does not affect the ROC and purity curves).
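For the binary, per-class case, Eq. (25) reduces to a one-line rescaling of the network outputs; the following sketch assumes scalar or array posteriors and priors given as plain floats.

```python
def update_prior(p_train, prior_train, prior_obs):
    """Rescale posteriors P(w_c|x,T) to new priors P(w_c|O), following Eq. (25)."""
    num = p_train * prior_obs / prior_train
    den = num + (1.0 - p_train) * (1.0 - prior_obs) / (1.0 - prior_train)
    return num / den
```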
MaxiMask comes with the P(ω_c|T) values already set; therefore, one only needs to specify the expected class proportions in the data, that is, the P(ω_c|O)'s.
Application to other data
As a sanity check, we apply MaxiMask to data obtained from different instruments not part of the training set. Examples of the resulting contaminant maps are shown in the appendix.
Our first external check is with ZTF (Bellm et al. 2019) data. The MaxiMask output for a science image featuring a prominent trail with variable amplitude is shown in Fig. A.4. We note the ability of MaxiMask to properly flag both the trail and the overlapping sources.
Our second external check is with the ACS instrument on board the Hubble Space Telescope (Figs. A.5 and A.6). This test illustrates MaxiMask's ability to distinguish cosmic rays from poorly sampled, diffraction-limited point-source images.
Given the seemingly good performance of MaxiMask on images from instruments not part of the training set, one question that may arise is whether MaxiMask can readily be used in production for such instruments, without any retraining or transfer learning. Our limited experience with MaxiMask seems to indicate that this is indeed the case, although retraining may be beneficial for specific instrumental features. As shown here, excellent performance can be reached by training with 50 000 images of 400 × 400 pixels taken from three different instruments. We think that a minimum of 10 000 such images would be a good starting point to train on a single instrument. Assuming CCDs of approximately 2000 × 2000 pixels, each containing 25 images of 400 × 400 pixels, this would require only 400 CCDs, equivalent to 10 fields for a 40-CCD camera.
Our last series of tests is conducted on digital images of natural scenes (landscape, cat, human face) to check for possible inconsistencies on data that are totally unlike those from the training set. Reassuringly, the maps produced by MaxiMask are consistent with the expected patterns: for instance, the cat's whiskers are identified as cosmic ray impacts, and the pixels with the lowest values as bad pixels.
Using MaxiMask and MaxiTrack
MaxiMask and MaxiTrack are available online. MaxiMask is a Python module that infers probability maps from FITS images. It can process a whole mosaic, a specific FITS image extension, or all the FITS files from a directory or a file list. For every FITS file being processed, a new FITS image is generated with the same HDU (Header Data Unit) structure as the input. Every input image HDU has a matching contaminant-map HDU in the output, with one image plane per requested contaminant. The header contains metadata related to the contaminant, including the prior and threshold used. An option can be set to generate a single image plane for all contaminants, using a binary code for each contaminant. Such composite contaminant maps can easily be used as flag maps, for example in SExtractor. Based on command-line arguments and configuration parameters, one can select specific classes and apply updates to the priors and thresholds of the probability maps. The code relies on the TensorFlow library and can work on both CPUs and GPUs, although the CPU version is expected to be much slower: MaxiMask processes about 1.2 megapixels per second with an NVidia Titan X GPU, and about 60 times less on a 2.7 GHz Intel i7 dual-core CPU. Yet, there is probably room for improvement in processing efficiency for both the CPU and GPU versions.
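As an illustration of how such output maps might be consumed downstream (the file name and the header keyword here are hypothetical, not the actual MaxiMask interface):

```python
from astropy.io import fits

# Hypothetical output file mirroring the input HDU structure described above.
with fits.open("image.masks.fits") as hdul:
    prob = hdul[1].data                         # one probability plane per contaminant
    thresh = hdul[1].header.get("THRESH", 0.5)  # hypothetical metadata keyword
    flag_map = prob > thresh                    # binary mask usable as a flag map
```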
MaxiTrack is used in the same way as MaxiMask, except that the output is a text file indicating the probability for the input image(s) to be affected by tracking errors (one probability per extension if the image contains several HDUs). It can also apply an update to the prior. It runs at 60 megapixels s⁻¹ with an NVidia Titan X GPU and is 9 times slower on a 2.7 GHz Intel i7 dual-core CPU.
Summary and perspectives
We have built a data set and trained convolutional neural network classifiers named MaxiMask and MaxiTrack to identify contaminants in astronomical images. We have shown that they achieve good performance on test data, both real and simulated. By delivering posterior probabilities, MaxiMask and MaxiTrack give the user the flexibility to set appropriate threshold levels and achieve the desired TPR/FPR trade-offs, depending on the scientific objectives and requirements. Both classifiers require no input parameters or knowledge of the camera properties.
Even though the mix of contaminants in the training set is unrealistic, being dictated by training requirements, we have checked that this does not impact performance. Output probabilities can be corrected to adapt the behavior of MaxiMask to any mix of contaminants in the data.
We are aware that several types of contaminants and images are missing from the current version and may be added in the future.
Missing local contaminants include two particularly prominent classes: optical and electronic ghosts. Unwanted reflections within the optics result in stray light in exposures; these reflections can produce spurious images of bright sources, commonly referred to as "optical ghosts". Sometimes, reflections from very bright stars outside of the field may also be seen. Detectors read through multiple ports also suffer from a form of electronic ghost known as cross-talk: electronic cross-talk causes bright sources in one of the CCD quadrants to generate a ghost pattern in the other quadrants. The ghosts may be negative or positive and are typically at the level of 1:10⁴. Both effects are a significant source of nuisance in wide-field exposures, especially in crowded fields and deep images, where they generate false, transient sources and can affect high-precision astrometric and photometric measurements.
Another category of common issues comprises defocused or excessively aberrated exposures, as well as trails caused by charge transfer inefficiency, all of which could easily be implemented in MaxiTrack. Also, the training set used in the current version of MaxiMask and MaxiTrack does not include images from space-borne telescopes nor, more generally, diffraction-limited imagers. Therefore, they are unlikely to perform optimally with such data, although limited testing indicates that they may remain usable for most features; an example of a prediction on HST data is shown in Figs. A.5 and A.6. Finally, MaxiMask could be extended not only to detect contaminants, but also to generate an inpainted (i.e., "corrected") version of the damaged image areas wherever possible.
Fig. 1. Examples of contaminants and their ground truth. Top row: cosmic ray hits, hot columns, bad columns. Bottom row: bad lines, persistence, satellite trails.
Fig. 4. Example of a spike mask obtained by inference of the separate neural network.
Fig. 5. Empirical spike flagging process. From left to right: source image centered on a bright star candidate, the same image thresholded, the two point-wise products, the matched-filtered point-wise products, and the final mask drawn from the empirical size computed with the two previous masks.
Fig. 7. Schematic view of the sample production pipeline. All COSMIC-DANCe archive images have their background map computed. Clean images are built from the COSMIC-DANCe archives. Contaminants from diverse sources (COSMIC-DANCe archives, Herschel archives, or simulations) are added to clean images; this step uses the background maps. The resulting local contaminant images are dynamically compressed (see Sect. 2.3.3) and ready to be fed into the neural network. Global contaminant samples are directly obtained from the COSMIC DANCe archives and dynamically compressed.
Fig. 8. Examples of input images (left) and their ground truth (right). Each class is assigned a color so that the ground truth can be represented as a single image (red: CR, dark green: HCL, dark blue: BCL, green: HP, blue: BP, yellow: P, orange: TRL, gray: FR, light gray: NEB, purple: SAT, light purple: SP, brown: OV, pink: BBG, dark gray: BG). Pixels that belong to several classes are represented in black. In the interest of visualization, hot and dead pixel masks have been morphologically dilated so that they appear as 3 × 3 pixel areas in this representation.
Fig. 9. Example of an unpooling process. The indices of max-pooling are kept and reused to upsample the feature maps.
Fig. 10. Schematic representation of the local contaminant neural network architecture.
Fig. 11. Schematic representation of the global contaminant neural network architecture.
Fig. 12. Examples of qualitative results on test data. Left: input; middle: ground truth; right: predictions. Each class is assigned a color so that the ground truth can be represented in one single image. Class predictions are made according to the threshold giving the highest MCC. The color coding is identical to that of Fig. 8.
Fig. A.4. Prediction example for an instrument not used in training: ZTF (Bellm et al. 2019). Left: a science image exposure. Top right: mask from the ZTF pipeline. Bottom right: flagging by MaxiMask; the trail is correctly recovered. Also, MaxiMask is able to correctly flag pixels where the trail overlaps sources, whereas in the ZTF pipeline all pixels (i.e., pixels belonging only to the trail, pixels belonging only to sources, and pixels belonging to both) are flagged as both trail and source.
Fig. A.5. Example of a prediction for a space instrument (HST) not used in training (ACS exposure). Left: a calibrated (flat-fielded, CTE-corrected) individual exposure of a stellar field in the Pleiades. Top right: fully calibrated, geometrically corrected, dither-combined image where cosmic rays and artifacts have been removed. Bottom right: MaxiMask contaminant identification. The color coding is identical to that of Fig. 8; pixels that belong to several classes are represented in black, and hot and dead pixel masks have been morphologically dilated for visualization.
Fig. A.6. Same as Fig. A.5, at a different location in the field, illustrating the ability of MaxiMask to differentiate poorly sampled stellar images from cosmic rays.
Table 1. Instruments used in this study.
Table 2. Parameters used for the generation of persistence.
Table 3. COSMIC-DANCE archive usage per imaging instrument.
Notes. Clean is for uncontaminated images, CR for dark images used for cosmic ray identification, No TR for images not affected by tracking errors, and TR for images affected by tracking errors.
Table 4. All the contaminants and their abbreviated names.
Table 5. Description of the local contaminant neural network architecture, including map dimensions.
Notes. All convolution kernels are 3 × 3 and max-pooling kernels are 2 × 2. All activation functions are ReLU, except in the output layer where the sigmoid is used.
Table 6. Description of the global contaminant neural network architecture, including map dimensions.
Notes. All convolution kernels are 9 × 9 and max-pooling kernels are 2 × 2. All activation functions are ReLU, except in the output layer where predictions are done using softmax.
Table 7. AUC of each class depending on the test set context.
| 13,392.4 | 2019-07-18T00:00:00.000 | [
"Physics"
] |
Towards experimental quantum-field tomography with ultracold atoms
The experimental realization of large-scale many-body systems in atomic-optical architectures has seen immense progress in recent years, rendering full tomography tools for state identification inefficient, especially for continuous systems. To work with these emerging physical platforms, new technologies for state identification are required. Here we present first steps towards efficient experimental quantum-field tomography. Our procedure is based on the continuous analogues of matrix-product states, ubiquitous in condensed-matter theory. These states naturally incorporate the locality present in realistic physical settings and are thus prime candidates for describing the physics of locally interacting quantum fields. To experimentally demonstrate the power of our procedure, we quench a one-dimensional Bose gas by a transversal split and use our method for a partial quantum-field reconstruction of the far-from-equilibrium states of this system. We expect our technique to play an important role in future studies of continuous quantum many-body systems.
Complex quantum systems with many degrees of freedom can now be controlled with unprecedented precision, giving rise to applications in quantum metrology 1 , quantum information 1,2 and quantum simulation 3,4 . This holds true specifically for architectures based on trapped ions 5 and ultracold atoms 3,6-8 , where large system sizes can now routinely be realized, while still maintaining control down to the level of single constituents. In the light of this development, the mindset has shifted when it comes to the assessment and verification of preparations of quantum states. Traditionally, experiments are being used as a vessel to test the validity of theoretical models by comparing their predictions to specific experimental output. With quantum experiments of many degrees of freedom becoming significantly more accurate, an attitude of 'quantum engineering' and quantum simulation is taking over. Compared with the traditional mindset, one does not compare the experimental data to predictions from theoretical models, but rather uses the full capabilities of the experimental setup as an investigative tool for the physical situation at hand. Triggered by this development and driven by the goal to maximize the information extracted from the experiment, the standards in quantum system identification have substantially risen. Quantum-state tomography [9][10][11] fulfils this need for precise and model-independent quantum-state identification. It asks the question: given data, what is the unknown quantum state compatible with those data? Maybe unsurprisingly, the interest in the field of quantum system identification and quantum-state tomography has exploded in recent years [10][11][12][13] .
For many degrees of freedom, unqualified quantum state tomography must be inefficient in the system size, as exponentially many numbers need to be specified. This problem has given way to the insight that practically only the states found in experiments need to be reconstructed, which form only a small subset of the full Hilbert space 14,15 . Accordingly, more efficient tomography tools 9 have been developed, ranging from quantum compressed sensing 10 (for states of approximately low rank), over permutation-invariant tomography, to matrix-product state tomography [11][12][13]16 . These approaches are based on using the right 'data set' having the appropriate 'sparsity structure' to capture quantum many-body systems. For discrete systems, matrix-product states efficiently capture the low-energy behaviour of locally interacting models and a large body of literature in the condensed-matter context backs up this intuition of the 'physical corner of Hilbert space' 14,15,17 .
In this work, we consider continuous systems, in which the tomographic problem is aggravated due to the fact that, in principle, infinitely many degrees of freedom need to be reconstructed. On the basis of the notion of sparsity, we present a novel quantum-field tomography procedure relying on the class of continuous matrix-product states (cMPS) 18,19 . This approach will allow us to give evidence that the state encountered in the laboratory is well approximated by a representative of this class.
Results
Quantum-field tomography. We apply our procedure to non-equilibrium experiments on a continuous quantum gas of one species of bosonic particles whose correlation behaviour can be captured by translation-invariant states of the form

|Ψ⟩ = Tr_aux[ 𝒫 exp( ∫₀^L dx (Q ⊗ 𝟙 + R ⊗ ĉ†(x)) ) ] |Ω⟩.

Here ĉ(x), x ∈ [0, L], are the canonical bosonic field operators, |Ω⟩ is the vacuum state vector, and Q, R ∈ C^{d×d} are matrices acting on an auxiliary d-dimensional space that completely parametrize the state. L is the length of the closed physical system, 𝒫 denotes the path-ordering operator, and Tr_aux traces out the auxiliary space. The bond dimension d takes the same role as the bond dimension for matrix-product states: low-entanglement states are expected to be well approximated by cMPS of low bond dimension; in turn, for suitably large d, every quantum-field state can be approximated.
We employ our reconstruction procedure to perform quantum state tomography for a one-dimensional (1D) system of ultracold Bose gases, an architecture that provides one of the prime setups for exploring the physics of interacting quantum fields 6,20,21 . The experiment consists of a large 1D quasi-condensate that is trapped using an atom chip 22 . To bring the system out of equilibrium, a split transversal to the condensate direction is performed. The subsequent out-of-equilibrium dynamics after the quench leads to apparent equilibration, prethermalization and thermalization 6,23,24 . In the middle of the trap, the system can be well approximated by two parallel quantum fields that are homogeneous and translationally invariant.
The experiment proceeds by performing a joint time-of-flight measurement of the two quasi-condensates. Since the experimentally measured images are single-shot measurements, repeating the experiment many times with identical initial conditions allows one to extract the phase difference θ̂_x of the two quasi-condensates at different longitudinal positions x and to construct higher-order correlation functions 6,25 . The phase correlation functions are defined as

$$C^{(n)}(x_1,\ldots,x_n) = \big\langle e^{\,i(\hat\theta_{x_1}-\hat\theta_{x_2}+\cdots-\hat\theta_{x_n})}\big\rangle, \qquad (2)$$

where θ̂_x are the measured phase differences, the signs alternate along the even number of positions, and the angular brackets denote the ensemble average (Methods section).
To capture these correlation functions in terms of a cMPS, we use a description in terms of effective field operators for the phase difference, ĉ_θ†(x) = n̂(x)^{1/2} e^{iθ̂_x}, where n̂ are density operators. As no density information could be obtained from the experiment in its current form, the expectation value of these operators remains unknown, and our work is a partial reconstruction of the state. However, the obtained cMPS contains its full phase correlation behaviour. Using this description, the n-point phase correlation functions can be written as expectation values of alternating products of the effective field operators ĉ_θ†, ĉ_θ (equation (4)). Since it is sufficient for performing the tomography procedure, we will use the correlation information of the normal-ordered subset with x_1 ≤ x_2 ≤ ⋯ ≤ x_n of the even-order correlation functions. In the cMPS language, assuming translation invariance and the thermodynamic limit, this can be reformulated as a sum of exponentials,

$$C^{(n)}(x_1,\ldots,x_n) = \sum_{k_1,\ldots,k_{n-1}} r_{k_1,\ldots,k_{n-1}}\, e^{\lambda_{k_1} t_1}\cdots e^{\lambda_{k_{n-1}} t_{n-1}},$$

with λ_k being the eigenvalues of the transfer matrix T, t_j = x_{j+1} − x_j, and the residues r built from a matrix M expressed in the diagonal basis of T (Methods section) 16 . The reconstruction proceeds by first extracting the eigenvalues λ_k from the two-point correlation function and, in a second step, determining a compatible M matrix 26 from the four-point correlators.
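A hedged sketch of the first reconstruction step: fitting the two-point correlation data with a sum of d² complex exponentials to obtain candidate poles λ_k. The paper's actual extraction relies on the matrix-pencil method of ref. 26; the nonlinear least-squares stand-in below (all function and variable names are assumptions) only illustrates the structure of the problem.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_poles(t, c2, n_poles=4):
    """Fit C2(t) ~ sum_k r_k * exp(lambda_k * t) by least squares.
    Returns complex poles lambda_k and residues r_k; a simple stand-in
    for the matrix-pencil method of ref. 26."""
    def resid(p):
        lam = p[:n_poles] + 1j * p[n_poles:2 * n_poles]
        r = p[2 * n_poles:3 * n_poles] + 1j * p[3 * n_poles:]
        model = (r[None, :] * np.exp(np.outer(t, lam))).sum(axis=1)
        return np.concatenate([(model - c2).real, (model - c2).imag])
    # start with decaying real poles and zero residues
    p0 = np.concatenate([-np.linspace(0.1, 1.0, n_poles),
                         np.zeros(3 * n_poles)])
    p = least_squares(resid, p0).x
    return (p[:n_poles] + 1j * p[n_poles:2 * n_poles],
            p[2 * n_poles:3 * n_poles] + 1j * p[3 * n_poles:])
```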
Data analysis. We find that a cMPS with d = 2, corresponding to four reconstructed poles and a 4 × 4 matrix M, matches the data. This indicates that the correlation function has a simple structure, as one would expect from such local physical interactions (specifically, based on previously explored descriptions in terms of a Luttinger liquid theory 6 ). More importantly, no previously known theoretical description of the physical situation at hand is needed, since the cMPS ansatz can be applied to any locally interacting quantum field. To estimate the performance of the reconstruction of the four-point correlation function, we use the mean relative deviation (Methods section) and find a small error of 1.4%, which is of the same magnitude as the experimental errors 6 .
Approximating a correlation function can be done in many ways, and it is, a priori, not clear that one has truly gained knowledge about the state. The advantage of the cMPS ansatz is that the approximation performed is sufficient to fully reconstruct the phase correlation behaviour of the cMPS. We build trust in the reconstructed state by using it to predict higher-order correlation functions, which in turn can be experimentally checked. This provides an excellent benchmark for our procedure and allows us to estimate the quality of our guess for the unknown experimental state. Specifically, we obtain an error of 3.2% for the six-point function (Fig. 1), estimated with bootstrapping techniques. This shows that the reconstruction of the full correlation behaviour of the state was successful, providing a proof-of-principle application for efficient state tomography of interacting many-body quantum fields.
We have performed our reconstruction of the six-point correlator for different hold times after the quench and observe that the fit quality drops substantially with increasing time, with mean relative deviations of 3.2%, 10.7% and 34.1% for times t = 3, 7 and 23 ms, respectively (Fig. 2). There are several possible explanations for this decrease in reconstruction quality. While quantum-field tomography necessarily has to rely on a finite-dimensional 'data set', it is clear that not all situations can be captured equally well by the approach proposed here. This method applies to states of low entanglement, a situation expected for ground states or for states out of equilibrium following quenches for short times. It will surely be difficult to capture highly entangled or thermal states, which are expected to have a high description complexity, with these tools 26 .
Discussion
The physics of sudden quenches in discrete settings is usually connected to a linear growth of entanglement with time 15,23,27 , while at any fixed time the entanglement satisfies an area law in space 15 . Note that while the continuous physical system at hand can be well captured by a free Tomonaga-Luttinger liquid model 28,29 , the states of the system can still be strongly entangled, in the sense that entanglement entropies across any real-space cut of the system are, in principle, arbitrarily large. It is precisely this spatial entanglement that will influence the quality of tensor network descriptions of the state and that is a key factor for the quality of any cMPS reconstruction 26 . Since our cMPS reconstruction with d = 2 is only well suited for states with low entanglement, a similar entanglement buildup during the performed sudden quench of quantum fields would be a natural explanation. Indeed, such light-cone dynamics of the correlations of these systems 6,30,31 have recently been made explicit experimentally. Such entanglement growth could conceptually be unveiled by investigating how the fit quality changes when the bond dimension is increased. Given the structure of the data set (analysis contained in the Methods section) and the increase of experimental errors with hold time, the exploration of this observation lies outside the scope of this work, but is surely an interesting topic for the near future.
Experimental imperfections or the remaining actual temperature could be other sources of the decrease in fit quality with hold time, as they lead to a mixed state, thus impeding our description in terms of pure states. Previous studies, however, successfully described the system in terms of a pure-state Luttinger liquid, even for long evolution times 31 . Moreover, the experimental data were taken in the middle of the trap, where, initially, the assumption of translational invariance holds to excellent accuracy. For long hold times after the quench, however, regions outside the centre of the trap will influence the behaviour of the system in the middle 6 , thus making the data less translationally invariant (Methods section).
The work presented here is surely a first step in the direction of a larger programme, advocating a paradigm change in the evaluation of experimental data from atomic-optical architectures. Instead of comparing predictions of an assumed theoretical model with data, one puts the data into the focus of attention and attempts a reconstruction in the mindset of quantum tomography. This seems a particularly important development in the context of quantum simulators, which have the potential to address questions on interacting quantum systems that are inaccessible with classical means. While partial information on the results of a quantum simulator can easily be accessed, a full read-out necessarily corresponds to performing quantum tomography, for which feasible tools are still lacking. The present work offers a step forward and presents a novel tool to obtain and build trust in the complete results of a quantum simulation without having to include any information on the underlying Hamiltonian of the system.

(Figure 1 caption: volumetric elements of certain projections of the high-dimensional six-point correlation function array, demonstrating good overall agreement between experimental data and predicted correlation data. In panel c, the absolute difference between the experimental and predicted data points for the projection C^(4)(0, 2, x_3, x_4) is shown as a bar plot, with the statistical uncertainties of the data as a transparent mesh. As a figure of merit for the performance of the reconstruction, the mean relative deviation over all indices of the relevant data simplex x_1 ≤ x_2 ≤ ⋯ ≤ x_6 (Methods section) gives a mean error of 2.5% and a maximum relative deviation of 9.1%.)
Experiment.
A single specimen of an ultracold gas of ⁸⁷Rb atoms is prepared using evaporative cooling on an atom chip. The final temperature and the chemical potential of the gas are both well below the first radially excited state of the trapping potential, implementing a 1D bosonic system that is well approximated by the Lieb-Liniger model. The systems contain several thousand atoms and spread over sizes as large as 100 μm. A sudden global quench is realised by transversally splitting the gas into two mutually coherent halves 32 , leading to an out-of-equilibrium, approximately pure state. The setup in principle allows for different splitting procedures; in particular, an experimental scheme to test the Unruh effect with a specially modelled split has recently been proposed 33 . Subsequently, this non-equilibrium system is left to evolve in the trap for a variable hold time. Its dynamical states are probed using matter-wave interferometry in time-of-flight, which enables the direct measurement of the local relative phase θ_x . Since the experimentally measured images are single-shot measurements, repeating the experiment many times with identical initial conditions gives access not only to the mean of the correlations but also to higher-order correlation functions 6 . The corresponding correlation functions are constructed by averaging over ≈150 experimental realizations.
We are restricted to even-order correlation functions in the experiment. The reason is that many experimental realizations are needed to construct the correlation functions. Each experimental realization provides a measurement of the relative phase θ_x = φ(x) + α. Here φ is the actual fluctuating phase that contains the interesting many-body physics, and α is a small global phase offset that is random in every experimental realization 32 . This global phase diffusion results from small shot-to-shot fluctuations in the electrical currents that create the trapping potential. These cause small random imbalances of the double well, leading to random and unknown values of α. For the even-order correlation functions, only differences between the θ at different positions need to be evaluated; consequently, the global shifts α cancel automatically. For odd-order correlation functions, however, contributions ∼e^{iα} remain. Hence, the measured result does not only contain the pure dynamics, but is significantly perturbed by the unknown fluctuations of α.
Reconstruction procedure. To make the correlation function in equation (2) directly accessible to our reconstruction procedure, we write it in terms of field operators ĉ(x). For this purpose, we use the fact that θ̂_x commutes for different positions and employ the polar decomposition to construct an effective field operator

$$\hat{c}_\theta^\dagger(x) = \hat{n}(x)^{1/2}\, e^{\,i\hat\theta_x}, \qquad (7)$$

where n̂(x) = ĉ_θ†(x) ĉ_θ(x) is taken to be the density of one of the two condensates. The construction ensures that these effective field operators indeed fulfil the correct commutation relations. Equation (4) follows immediately.
In the cMPS formalism, the translationally invariant correlation functions in equation (4) can be directly calculated in the thermodynamic limit in terms of the cMPS variational parameter matrices R and Q, with the transfer matrix

$$T = Q\otimes\mathbb{1} + \mathbb{1}\otimes\bar{Q} + R\otimes\bar{R}$$

and positive distances t_j = x_{j+1} − x_j for j = 1, …, n − 1. The overline denotes complex conjugation. This form of the correlator can be derived from the correspondences between field operators and variational matrices as described in refs 18,19. By writing all matrices in the basis where the transfer matrix T is diagonal and performing the limit L → ∞, the correlation function takes the form

$$C^{(n)}(x_1,\ldots,x_n) = \sum_{k_1,\ldots,k_{n-1}} r_{k_1,\ldots,k_{n-1}}\, e^{\lambda_{k_1} t_1}\cdots e^{\lambda_{k_{n-1}} t_{n-1}}. \qquad (10)$$

The λ_k are the eigenvalues of the transfer matrix T, also known as poles, and the pre-factors, usually referred to as residues, are

$$r_{k_1,\ldots,k_{n-1}} = M^{-1}_{1,k_{n-1}} M_{k_{n-1},k_{n-2}} \cdots M^{-1}_{k_2,k_1} M_{k_1,1}, \qquad (11)$$

where X has been chosen such that X^{-1} T X is diagonal 16,26 . For a fixed bond dimension, there are in general d² poles and M ∈ C^{d²×d²}. Note that this is different from the definition in ref. 26, where the matrix M stems from density-like correlation functions; there, according to the calculus of cMPS correlation functions, the field-operator term for each position corresponds to the matrix R ⊗ R̄.
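Assuming the pole/residue form of equations (10) and (11), the n-point correlator can be evaluated numerically as follows. This sketch is a direct transcription of those formulas (0-based indices, with the paper's index '1' mapped to 0); it brute-forces the sum over index chains and is meant for small d and n only.

```python
import numpy as np
from itertools import product

def residue(M, Minv, ks):
    """r_{k1..k_{n-1}} = Minv[0, k_{n-1}] * M[k_{n-1}, k_{n-2}] * ...
    * Minv[k2, k1] * M[k1, 0], alternating inverse and plain factors."""
    chain = [0] + list(ks[::-1]) + [0]       # 1, k_{n-1}, ..., k_1, 1
    r = 1.0 + 0j
    for i in range(len(chain) - 1):
        mat = Minv if i % 2 == 0 else M
        r *= mat[chain[i], chain[i + 1]]
    return r

def n_point(lam, M, xs):
    """C(x_1,...,x_n) = sum over k_1..k_{n-1} of the residue times
    exp(lam_{k_1} t_1) ... exp(lam_{k_{n-1}} t_{n-1}), t_j = x_{j+1}-x_j."""
    Minv = np.linalg.inv(M)
    t = np.diff(np.asarray(xs, dtype=float))  # positive distances t_j
    val = 0j
    for ks in product(range(len(lam)), repeat=len(t)):
        val += residue(M, Minv, ks) * np.exp(np.dot(lam[list(ks)], t))
    return val
```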
Note that equating two consecutive indices k_j , k_{j+1} in the n-point function in equation (10) leads to an (n − 2)-point function, as expected from equation (2). Specifically, there are many equivalent projections of a four-point function that correspond to two-point functions. However, due to imperfections (that is, deviations from translational invariance), the experimental realizations of these projections are not identical. Averaging over the projections leads to an expression of the same form as the two-point correlation function of a translationally invariant cMPS,

$$C^{(2)}(t) = \sum_{k} M^{-1}_{1,k}\, M_{k,1}\, e^{\lambda_k t}. \qquad (15)$$

The reconstruction starts by extracting the eigenvalues λ_k from the averaged two-point correlation function using a least-squares fit, under the assumption of translational invariance for the modelled system. The suitable bond dimension for the data at hand can already be judged at this point by analysing the structure of the two-point correlation function. To determine all entries of M, n-point functions with n > 2 have to be taken into account, since for n = 2 only the entries M^{-1}_{1,k} and M_{k,1} appear; see equation (15). Since multiplying M by a constant and conjugating it with a diagonal matrix whose first entry is equal to one leaves all properties considered in this work invariant, we can require that M_{1,k} = 1 for each k = 1, …, d² (refs 16,26). The remaining independent entries of the M matrix are fixed by including the four-point correlation data. For this, we use a Nelder-Mead simplex algorithm that varies the parameters of the M matrix and calculates the corresponding residues according to equation (11). Each choice of an M matrix thus gives a prediction for the four-point correlators, and the agreement with the experimental data is taken as the quality indicator for the algorithm. Working with a cMPS of bond dimension d = 2 and relying on a set of 100 random initial numerical seeds proved to be sufficient for approximating the measurement data well. Taking into account the gauge and symmetry arguments 26 , the employed cMPS, with bond dimension d = 2 in terms of λ_k and M, has 15 independent parameters in total.
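A sketch of the second reconstruction step under the same assumptions, reusing n_point from the previous snippet: the free entries of M (first row pinned to 1 by the gauge argument quoted above) are varied by a Nelder-Mead simplex search against the measured four-point data, with multiple random seeds as in the text. The parameter count here reflects only the first-row gauge; the paper's additional symmetry arguments reduce it further to 15 in total.

```python
import numpy as np
from scipy.optimize import minimize

def fit_M(lam, c4_data, x_grid, n_seeds=100):
    """Fix the free entries of M by matching predicted four-point
    correlators to measured ones with a Nelder-Mead simplex search."""
    D = len(lam)                              # D = d^2 poles

    def unpack(p):
        M = np.ones((D, D), dtype=complex)    # gauge: M[0, :] = 1
        M[1:, :] = (p[:(D - 1) * D]
                    + 1j * p[(D - 1) * D:]).reshape(D - 1, D)
        return M

    def cost(p):
        M = unpack(p)
        pred = np.array([n_point(lam, M, xs) for xs in x_grid])
        return np.sum(np.abs(pred - c4_data) ** 2)

    best = min((minimize(cost, np.random.randn(2 * (D - 1) * D),
                         method='Nelder-Mead') for _ in range(n_seeds)),
               key=lambda res: res.fun)       # random restarts, as in text
    return unpack(best.x)
```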
As discussed in the main text, we see a significant decrease in fit quality with hold time. Several issues enter here. One would naturally expect entanglement entropies to grow over time after the sudden quench, leading to the need for a larger bond dimension. This is presumably the case, but in our analysis it is mostly masked by two other effects. First, the statistical error in the experiment increases substantially with hold time, making the data for longer times considerably less reliable (Fig. 2) and also calling into question our fit in terms of a pure state. Second, the translational-invariance assumption is slowly violated as the hold time increases. This is not surprising, since the light-cone-like dynamics of the trapped system give good reason to believe that trap effects need time to enter the central part of the system. As a quantitative probe of how translationally invariant the data are, we consider the two-point correlation function at 21 different points and calculate the variance over those positions for variable distances. The mean of these variances gives a good indicator of how much the two-point function varies with the position at which it is evaluated. For the hold times t = 3, 7 and 23 ms, we find deviations from translational invariance of 0.3 × 10⁻², 5.4 × 10⁻² and 8.3 × 10⁻², clearly indicating that for longer hold times our assumption of translational invariance is considerably less accurate. Given these limitations of the data set and the fact that the two-point functions averaged over different positions do not possess a rich enough structure, we feel that using a bond dimension larger than d = 2 would amount to overfitting. Let us point out that this is by no means a limitation of our method as such, as reconstructions with higher bond dimension could easily be performed using matrix-pencil methods as described in ref. 26.
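The translational-invariance probe described above amounts to a one-liner; a possible reading (array layout assumed) is:

```python
import numpy as np

def ti_indicator(c2):
    """Mean over separations of the variance of the two-point function
    across evaluation positions. c2[p, s] holds C2 evaluated at position
    index p (21 points in the text) and separation index s."""
    return np.var(c2, axis=0).mean()
```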
Quantifying the statistical compatibility and error analysis. To quantify the error of our tomography procedure, we use the mean relative deviation with respect to the fitted (reconstructed) data,

$$\epsilon = \frac{1}{|S|}\sum_{x\in S}\frac{|C_{\mathrm{exp}}(x)-C_{\mathrm{rec}}(x)|}{|C_{\mathrm{rec}}(x)|},$$

where S is the set of all data points x = (x_1, …, x_n) with x_1 ≤ x_2 ≤ ⋯ ≤ x_n, and |S| denotes the number of elements in S. In addition, to estimate the robustness of our algorithm, we employ a bootstrapping method (see, for example, ref. 34). Namely, starting with the reconstructed four-point function from the experimental data, we add Gaussian noise with zero mean and standard deviation given by the statistical uncertainties from the experiment. Subsequently, we perform our cMPS tomography procedure and reconstruct the six-point function. We repeated this procedure 100 times and computed the entry-wise relative standard deviation of the six-point functions. For the average over all entries, we obtain a deviation of 1.1% (with a maximum relative s.d. of 2.8%). This confirms that our reconstruction procedure is robust to the errors we expect in the experiment.
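The two error measures used above can be sketched as follows; `reconstruct` stands for the whole cMPS pipeline mapping a four-point function to a predicted six-point function and is a placeholder, not a function of any actual library.

```python
import numpy as np

def mean_relative_deviation(c_exp, c_rec):
    """Mean relative deviation over the simplex x_1 <= ... <= x_n,
    taken entry-wise with respect to the reconstructed data."""
    return np.mean(np.abs(c_exp - c_rec) / np.abs(c_rec))

def bootstrap_six_point(c4_rec, sigma, reconstruct, n_boot=100):
    """Bootstrap of the tomography pipeline: perturb the reconstructed
    four-point data with zero-mean Gaussian noise of s.d. `sigma`,
    re-run the reconstruction, and return the entry-wise relative s.d.
    of the resulting six-point functions."""
    rng = np.random.default_rng(1)
    c6 = np.array([reconstruct(c4_rec + rng.normal(0.0, sigma))
                   for _ in range(n_boot)])
    return np.std(c6, axis=0) / np.abs(np.mean(c6, axis=0))
```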
"Physics"
] |
Variable frequency of LRRK2 variants in the Latin American research consortium on the genetics of Parkinson’s disease (LARGE-PD), a case of ancestry
Mutations in Leucine-Rich Repeat Kinase 2 (LRRK2), primarily located in codons G2019 and R1441, represent the most common genetic cause of Parkinson’s disease in European-derived populations. However, little is known about the frequency of these mutations in Latin American populations. In addition, a prior study suggested that a LRRK2 polymorphism (p.Q1111H) specific to Latino and Amerindian populations might be a risk factor for Parkinson’s disease, but this finding requires replication. We screened 1734 Parkinson’s disease patients and 1097 controls enrolled in the Latin American Research Consortium on the Genetics of Parkinson’s disease (LARGE-PD), which includes sites in Argentina, Brazil, Colombia, Ecuador, Peru, and Uruguay. Genotypes were determined by TaqMan assay (p.G2019S and p.Q1111H) or by sequencing of exon 31 (p.R1441C/G/H/S). Admixture proportions were determined using a panel of 29 ancestry informative markers. We identified a total of 29 Parkinson’s disease patients (1.7%) who carried p.G2019S, and the frequency ranged from 0.2% in Peru to 4.2% in Uruguay. Only two Parkinson’s disease patients carried p.R1441G and one patient carried p.R1441C. There was no significant difference in the frequency of p.Q1111H in patients (3.8%) compared to controls (3.1%; OR 1.02, p = 0.873). The frequency of LRRK2-p.G2019S varied greatly between different Latin American countries and was directly correlated with the amount of European ancestry observed. p.R1441G is rare in Latin America despite the large genetic contribution made by settlers from Spain, where the mutation is relatively common.
INTRODUCTION
Mutations in Leucine-Rich Repeat Kinase 2 (LRRK2) represent the most frequent genetic cause of Parkinson's disease (PD); there is consistent evidence that at least eight missense variants (p.N1437H, p.R1441C, p.R1441G, p.R1441H, p.R1441S, p.Y1699C, p.G2019S, and p.I2020T) are pathogenic, while more than fifty remain of undetermined significance. 1,2 p.G2019S is the most frequent pathogenic LRRK2 variant in European-derived, Ashkenazi Jewish, and North African populations; however, frequencies range from 0 to 42% worldwide depending on the population. 3 While most carriers share a common founder, 4,5 a small group of patients in Europe and Japan share two different haplotypes, suggesting at least three different mutation events. 6,7 Another mutation, p.R1441G, is almost exclusively restricted to Northern Spain and is thought to have originated in the "Basque" region during the seventh century. 8,9 Only four patients have been reported to carry this variant outside of Spain: one in Mexico, 10 one in the US, 11 one in Uruguay 12 and, more recently, one in Japan. 13 Three other less common pathogenic variants (p.R1441C, p.R1441H, and p.R1441S) are known to occur within the same codon. 4,14,15 Thus p.G2019S, together with the variants in the R1441 codon, represents the two largest mutational hotspots in the gene, with highly variable frequencies depending on geographic location and ethnic background. There are other LRRK2 variants that are population specific and associated with PD risk, such as the p.G2385R and p.R1628P single nucleotide polymorphisms (SNPs) in Asians. 16 Little is known about the frequency of these or other LRRK2 variants in Latin American countries. In prior studies of small cohorts from South America, our group and others have shown that the frequency of p.G2019S and p.R1441G varies substantially across different countries. 3,12,17,18 In a small pilot study we also observed that the LRRK2-p.Q1111H SNP, which is common in some Latin American populations, occurred at an increased frequency in PD patients, though the difference did not reach significance. 19 Thus, whether this variant represents a PD risk factor in Latino populations remains unclear.
In this study we sought to further elucidate the frequency of the LRRK2-p.R1441G/C/H/S and p.G2019S mutations and the influence of p.Q1111H on disease risk in the largest cohort of Latin American PD patients ever examined.
RESULTS
We identified a total of 29 patients who carried p.G2019S, including one homozygote (from Argentina). The p.G2019S frequency varied substantially between sites and was strongly correlated with the proportion of European admixture observed in representative samples from the corresponding site (Table 1; Fig. 1a). All but two of the carriers had an age at onset over 40 years (mean 54.7, range 37-72). All carriers reported at least one European ancestor, and only five reported a family history of PD. No sex differences were observed, as exactly 50% were female. Nine unaffected relatives of four of the carriers were also found to harbor p.G2019S; their ages ranged from 19 to 55 years. Five healthy controls (age at recruitment 39-77) were also found to carry p.G2019S.
Genotyping of rs28903073 showed that all 34 p.G2019S carriers (29 patients and 5 controls) carried at least one copy of the minor allele (A), suggesting that they all share the most common haplotype reported among individuals with p.G2019S in Europe and North America. 4 We also identified three patients who were heterozygous for a pathogenic mutation in codon 1441. Two carried p.R1441G (from Uruguay and Peru) and one carried p.R1441C. All three had an age at onset ≤50 years. Only the p.R1441C carrier reported a family history of PD (Fig. 2b), despite the fact that mutations in this codon are highly penetrant. 20,21 We screened five additional unaffected family members of the Peruvian patient with p.R1441G, including both of his parents, who are still alive in their late 80s, and identified three more carriers (Fig. 2a). All carriers from this family shared the same haplotype that has been reported among p.R1441G carriers in the "Basque" region (Supplementary Table 1). We did not find other pathogenic variants in exon 31 in any patients or controls.
We also screened our cohort for the p.Q1111H polymorphism and found 118 patients (10 homozygotes) and 58 healthy controls (3 homozygotes) who carried this variant. The allele frequency was highly variable between sites and was highly correlated with the proportion of Amerindian admixture observed at each site (Fig. 1b). However, there was no significant difference in p.Q1111H frequency between cases and controls in the combined sample (or by site) after adjusting for age, sex and site (Table 2).
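The reported site-level correlations could be computed along the following lines. The p.G2019S frequencies are those quoted in the Discussion; the European admixture values for Brazil, Colombia and Ecuador are placeholders, since the text only gives >85% for Argentina and Uruguay and about 25% for Peru.

```python
from scipy.stats import pearsonr

# p.G2019S carrier frequency (%) per site, from the text; admixture
# values marked below as placeholders are NOT from the paper.
freq_g2019s = {'Argentina': 3.2, 'Uruguay': 4.2, 'Brazil': 1.4,
               'Colombia': 1.5, 'Ecuador': 1.2, 'Peru': 0.2}
eur_admix   = {'Argentina': 86., 'Uruguay': 88., 'Peru': 25.,   # from text
               'Brazil': 60., 'Colombia': 60., 'Ecuador': 45.}  # placeholders

sites = sorted(freq_g2019s)
r, p = pearsonr([eur_admix[s] for s in sites],
                [freq_g2019s[s] for s in sites])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```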
DISCUSSION
Our findings indicate that the frequencies of the LRRK2 p.G2019S and p.R1441G/C mutations vary widely across countries in Latin America and are strongly linked to the amount of European admixture existing in each country. We observed a high p.G2019S frequency in Argentina (3.2%) and Uruguay (4.2%), the two countries with the highest proportion of European admixture (>85%). In contrast, the prevalence of the mutation was lower in Peru (0.2%), where European admixture is only about 25%. Frequencies in Ecuador (1.2%), Colombia (1.5%) and Brazil (1.4%) are similar to those observed in other studies in Latin America and in the US. 3 This is the first time that p.G2019S has been reported in Ecuador.
(Table 1 notes: includes one homozygote; six p.G2019S carriers and one p.R1441G carrier from Peru and Uruguay were described in a previous publication. 12 )

Our data indicate that all of the individuals in our sample carrying p.G2019S share the haplotype most commonly reported among carriers of this mutation in European-derived, Ashkenazi Jewish, and North African populations. These individuals are all
believed to share a common founder who lived more than 2000 years ago in the Middle East. 4 In contrast to the widespread distribution of p.G2019S, mutations in codon 1441, a mutational hotspot region of the LRRK2 gene, are rare in Latin America. We only identified three carriers, two with p.R1441G and one with p.R1441C. It is interesting that neither of the p.R1441G carriers reported a family history of the disease, despite the high penetrance rates reported for this mutation (83.4% by the age of 80 years). 20 The p.R1441G carrier from Peru presented with clinically typical PD at the age of 29, which is more than two decades younger than the mean age at onset reported for patients with p.R1441G in other cohorts. 22,23 In contrast, his 89-year-old mother and 54-year-old brother both carry the mutation but have not shown signs of parkinsonism on serial neurological examinations, demonstrating that penetrance for this mutation can be highly variable. The clinical features of the proband were characterized by gradual progression over two decades and a good response to levodopa with late onset of motor fluctuations. This is consistent with the more benign phenotype previously reported in other PD patients with p.R1441G. 24,25 The reason for the proband's unusually early age at onset is not clear. 21 He was negative for both point and dosage mutations in the PARK2 gene (data not shown), the most frequent genetic cause of early-onset PD, as well as for mutations in GBA, which not only increase risk for PD but also lower the age at onset by at least 4 years. 26 It is possible that environmental exposures or other untested genetic factors influence the age at onset in this instance. All of the carriers in this family share the common haplotype that is thought to have originated in the "Basque" region during the seventh century. This agrees with the historic influence of Spanish colonizers on the Peruvian population. 27,28 The other p.R1441G carrier has been previously described elsewhere. 12 The low p.R1441G frequency in our sample is somewhat surprising given the large historical influence of Spain in Latin America. However, it is consistent with previous smaller studies in which no carriers were found, including patients from Brazil 29 and Chile. 30 This low frequency might be explained by many factors, including the possibility that the first Spanish colonizers came from regions of Spain with a low mutation frequency. Including our carriers, only five have been identified outside of Spain. [10][11][12][13] Regarding the other cases outside of Spain, the Uruguayan and Japanese p.R1441G PD cases showed a novel haplotype suggesting a distinct founder effect, 12,13 while for the Mexican and North American cases no haplotype analysis was reported. 10,11 We also identified a patient carrying p.R1441C, which we believe is the first report of this mutation in South America. Most of the families previously reported with this mutation are from Europe or the US, except for one from Asia. At least four different haplotypes have been observed in individuals with p.R1441C. 31 Our patient reported that his paternal relatives originated in Spain, while his maternal relatives came from Lebanon.
The proband of our p.R1441C family was a 63-year-old male who presented with bradykinesia and rigidity in the right upper limb at the age of 44 years. He was started on levodopa therapy 3 years later, with a clear symptomatic response. He has never displayed a resting tremor, but does have a slight bilateral postural tremor. These clinical characteristics are somewhat atypical, because nearly 60% of patients with p.R1441C report resting tremor as their initial symptom and <20% of carriers have an age at onset of <50 years. 21,31 The proband developed motor complications after 3 years of levodopa treatment and now has a very short duration of drug action (90-120 min) and moderate peak-dose dyskinesias with generalized choreic and dystonic movements. He is in Hoehn and Yahr stage III when in the "on" state. No significant cognitive impairment was observed on detailed neuropsychological testing at age 59. He reported nine relatives affected with parkinsonism; however, DNA was not available for additional individuals of the family (Fig. 2).
Finally, LRRK2 p.Q1111H was originally reported in two siblings with PD from the U.S., but the pedigree was too small to assess segregation. 32 However, this variant was nominated as potentially pathogenic since it was not found in almost 400 non-Hispanic white controls. We later demonstrated that p.Q1111H is a common variant that is restricted to populations of Amerindian origin. 19 In an analysis of 1150 PD patients and 310 healthy controls from Peru and Chile, we showed a trend toward an association between p.Q1111H and PD (OR 1.38; p = 0.10). Here we attempted to validate these findings in the largest Latin American PD cohort ever assembled. However, after adjusting for important covariates, we observed no association between p.Q1111H and PD (OR 1.02, p = 0.873). This suggests that p.Q1111H is a "benign" population-specific SNP.
Human genetics is proving to be a key component of personalized medicine. However, most PD genetic studies have focused on individuals of European origin, and little is known about genetic risk factors and causal genes for PD in other populations. Without such information, it will be difficult to individualize new treatments for PD patients from underrepresented groups, which might further increase existing social disparities. Furthermore, genetic analyses of non-European populations might yield new PD genes that could elucidate novel therapeutic targets to benefit all patients. The data presented here begin to address this gap in knowledge, and we have just initiated large scale studies in the LARGE-PD cohort that will further define the genetic profile of PD in Latinos.
MATERIALS AND METHODS
Subjects
We screened a total of 1734 PD patients and 1097 healthy controls recruited in Argentina, Brazil, Colombia, Ecuador, Peru, and Uruguay as part of the Latin American Research Consortium on the Genetics of Parkinson's disease (LARGE-PD). All patients were evaluated by a movement disorders specialist at each of the sites and met UK PD Society Brain Bank clinical diagnostic criteria. The characteristics of this cohort are presented in Table 1. Data for a subset of these subjects have been previously published in analyses of p.G2019S and codon 1441 mutations (n = 365), and of p.Q1111H (n = 940). 12,19

Genetics

Genomic DNA was extracted from peripheral blood samples using standard methods. All samples were screened for p.G2019S and p.Q1111H by TaqMan assay, and for a cluster of four substitutions in codon 1441 (p.R1441C/G/H/S) by sequencing LRRK2 exon 31 using the Applied Biosystems BigDye Terminator v3.1 Cycle Sequencing Kit. Sequence data were analyzed using Mutation Surveyor (SoftGenetics, PA). All p.G2019S carriers were verified by Sanger sequencing using the same methods as for exon 31.
Haplotype analyses for p.R1441G were performed using 5 SNPs and 10 microsatellite markers spanning 6 Mb across the LRRK2 region 28 (Supplementary Table 1). The haplotype background for p.G2019S was determined by genotyping rs28903073. The "A" allele of this SNP has a frequency of <0.1%, and its presence indicates the rare haplotype shared by most p.G2019S carriers. 4 SNP markers were genotyped by Sanger sequencing using previously described methods. 9 Microsatellites were amplified by PCR using fluorescently labeled forward primers, run on an ABI PRISM 3130 Genetic Analyzer, and analyzed using GeneMapper 4.0 software (Applied Biosystems, CA).
We also screened a total of 214 individuals from five of the six participating sites (range, 17-50 per site) using a custom panel of 29 ancestry informative markers (AIMs) to estimate the proportion of admixture from the four continental population groups (Asian, African, European, and Amerindian) present at each site (Supplementary Methods and Supplementary Tables 2 and 3). Genotyping was performed using TaqMan assays on the Fluidigm BioMark HD System. We used STRUCTURE (http://pritch.bsd.uchicago.edu/structure.html) to estimate the percent ancestry for our samples with reference to the HGDP + HapMap Phase III groups.
Statistical analysis
Association of PD with p.Q1111H was assessed by logistic regression analysis under an additive model, adjusting for sex and age. We also included site as a covariate in the analysis of the combined cohort.
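A minimal sketch of such an additive-model logistic regression with statsmodels; the file name and column names are hypothetical stand-ins for the LARGE-PD data layout.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: one row per subject with PD status (0/1), the
# p.Q1111H genotype coded additively (0/1/2 minor alleles), age, sex
# and recruitment site. Column names are illustrative only.
df = pd.read_csv("large_pd_genotypes.csv")

model = smf.logit("pd ~ q1111h + age + C(sex) + C(site)", data=df).fit()
print(model.summary())
# Odds ratio for the variant is the exponentiated coefficient
print("OR =", float(np.exp(model.params["q1111h"])))
```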
"Biology"
] |
Urbanization and development of vertical agriculture in Russia
This paper discusses new technologies for the production of high-quality, environmentally friendly agricultural products, whose production is no longer associated only with rural areas. According to the authors, the concept of a digital city is now widening: it includes new trends in which agricultural production extends its boundaries, including vertical agriculture and the urbanization of farming.
Introduction
The agricultural sector of Russia is characterized by new trends: the digitalization of socio-economic activity and the urbanization of agriculture, whereby, in the context of a shrinking rural population, part of production is transferred to cities.
Improving the quality of life of the population is one of the key tasks of the socio-economic development of Russia. To address this task, projects such as the "smart city/settlement" or "digital city/settlement" are being developed and implemented, covering such areas as "smart medicine", "smart transport", "smart ecology", "smart environment", "urban agriculture" and others [1,2].
The population threshold for granting a settlement the status of a city is conventional and depends on many factors. Usually, in Russia, settlements with a population of more than 10-12 thousand inhabitants receive the status of a city. However, there are cities with fewer than 10 and even fewer than 5 thousand people. At the same time, there are many urban-type settlements and even rural settlements with more than 10 thousand inhabitants.
The objective of this study is to determine the distinctive features of vertical and urban agriculture, to assess the trends in the implementation of such projects in the federal districts of the country, and to compare the factors that positively and negatively affect the urbanization of agriculture.
Related work
Most of the population of Russia consists of urban dwellers. The proportion of the urban population in the country is high and grows every year. According to our estimates, by 2030 the rural population will fall to 23 million people, that is, it will decrease by 4 million people over 20 years (Fig. 1). In the scientific literature, there is a wide discussion of the terminology associated with the concept of urbanization of the agricultural sector (Table 1). There are different interpretations of the concept of urbanization; some researchers consider it a method, others a process. The term urbanization is applied not only to territories but also to areas of economic activity, industries and sectors of the economy, for example, to the agricultural sector, justifying its development as a high-tech cluster on the one hand and, on the other, decoupling the concepts of "agricultural production" and "rural territory". A. Kumar and J. S. Rattan, in describing an ecologically clean city, write about the need to create various agricultural structures and plots within its boundaries, and about vertical agricultural buildings of the "agro-tower" type, which will reduce food delivery times to consumers [9].
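The 2030 forecast quoted above is consistent with a simple linear trend; a sketch follows (the 2020 midpoint below is implied by linearity, not taken from the paper):

```python
import numpy as np

# Anchor points: roughly 27 million rural inhabitants around 2010 and a
# forecast 4-million decrease over 20 years, as stated in the text.
years = np.array([2010.0, 2020.0])
rural_mln = np.array([27.0, 25.0])      # assumed linear midpoint for 2020
slope, intercept = np.polyfit(years, rural_mln, 1)
print(f"2030 forecast: {slope * 2030 + intercept:.1f} million")  # -> 23.0
```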
Thanks to new architectural and engineering solutions and to the information and communication technologies that enable the automation of the agro-industrial complex, a number of developed countries are already designing and implementing vertical agriculture. K. Jürkenbeck, A. Heumann and A. Spiller, analysing the concepts and technologies of vertical agriculture and sustainable agriculture, find no significant difference between them [10].
Let us highlight the features of urbanization of the agricultural sector:
- possible lack of binding to the land;
- distribution across the territory of cities;
- minimization of the area used, but maximization of the volume used by farms;
- creation of digital ecosystems for the management of vertical farms;
- use of new architectural and engineering solutions, information and communication technologies, and breeding methods and technologies for the production of agricultural products.
According to the authors, urbanization of the agricultural sector is the process of spreading high-tech agricultural production across the territory of cities.
Thus, by the time of the sixth technological paradigm, more and more urban dwellers will be involved in the agricultural sector, and there will be a need for specialists who have mastered the new technologies. Even now, however, the agricultural sector requires new solutions that only human capital and new technologies can deliver. As L. Palmer writes, in the United States there is a post-industrial trend of the emergence of an urban rural population [11].
We can assume that there is a tendency for the differences between the rural and urban population, and between agricultural and other sectors of production activity, to blur, but so far, according to the research of V.A. Ilyin and T.V. Uskova, Russia is far from this trend [12,13]. Nevertheless, specialists from various industries are already being involved in developing technological solutions for implementing scenarios of the scientific and technological development of agriculture.
Methods
The study concerns the assessment of the quality of life of the population with regard to the possibility of obtaining environmentally friendly products in the territory of "smart settlements".
One of the areas of "smart agriculture" is the urbanization of agriculture. We highlight the factors influencing the formation of "smart agriculture". For this purpose, it is necessary to consider the terminology associated with the development of urban agriculture and the characteristics of urbanization. It is then necessary to collect and process data on the ratio of the rural and urban population shares in the Federal Districts of Russia and to obtain prognostic estimates until 2030. Information on the presence and distribution of vertical farms, as examples of urbanization in the Federal Districts, is collected from online Internet resources.
Results and discussion
To determine the differences between vertical and urban agriculture, we considered such characteristics as the goal, main tasks, territorial location, and characteristics of the premises used (Table 2). The analysis showed that, from the standpoint of improving the quality of life, the most attractive features are the creation of an eco-friendly city and the minimization of the land areas used. The urbanization of Russian regions is heterogeneous (Table 3). At the same time, due to natural and economic, socio-economic and cultural characteristics, vertical agriculture develops unevenly. The Tsentralny, Northwestern, Volga, Siberian and Far-Eastern districts specialize in automated vertical farms for the production of vegetables and greens, whereas the Uralsky district prefers growing strawberries. There are no vertical farms in the Yuzhny and North-Caucasian districts.
The analysis shows that the highest proportion of the rural population is in the Yuzhny and North Caucasian Federal Districts, and in these same districts automated vertical crop farms are not being developed or introduced; however, automated livestock farms are being designed (Table 3). This difference is primarily due to the formation of demand, which is fully satisfied in some regions and only partially in others.
The cost structure and, first of all, the price of electricity (Table 4) affect the promotion of vertical and urban agriculture projects in the cities of Russia.
* The price is valid for the preferential category, which includes vegetable producers
We can compare the payback periods of projects in cities in different districts. Even the longest payback period of 5 years meets the requirements of investors, given that demand is growing and supply is limited. Therefore, the prospects for the development of automated vertical farms can be considered economically justified.
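For reference, the simple payback period used in such comparisons divides the investment by the net annual cash flow; the figures below are placeholders, not values from Table 4.

```python
def simple_payback(capex, annual_revenue, annual_opex):
    """Simple payback period in years: investment / net annual cash flow."""
    return capex / (annual_revenue - annual_opex)

# Illustrative vertical-farm project (all figures are placeholders):
# 60 mln RUB capex, 20 mln annual revenue, 8 mln annual operating costs.
print(simple_payback(60e6, 20e6, 8e6))   # -> 5.0 years
```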
We believe that when developing and implementing new projects, it is necessary to take into account the factors that positively and negatively affect the formation of an "urbanized agricultural sector" as a direction for the development of a "smart city" (Table 5).
Conclusion
The study assessed the quality of life of the population with regard to the possibility of obtaining environmentally friendly products in the territory of "smart settlements", treating the urbanization of agriculture as one of the areas of "smart agriculture". To this end, the terminology associated with the development of urban agriculture and the characteristics of urbanization were considered; data on the ratio of the rural and urban population shares in the Federal Districts of Russia were collected and processed, and prognostic estimates until 2030 were obtained. Information on the presence and distribution of vertical farms, as examples of urbanization in the Federal Districts, was collected from online Internet resources.
"Agricultural and Food Sciences",
"Environmental Science",
"Geography",
"Economics"
] |
State-of-Charge Balancing Control of a Modular Multilevel Converter with an Integrated Battery Energy Storage
With the fast development of the electric vehicle industry, the reuse of second-life vehicle batteries is becoming more attractive; however, both the state-of-charge (SOC) inconsistency and the capacity inconsistency of second-life batteries limit their utilization. This paper focuses on a battery energy storage system (BESS) based on a modular multilevel converter (MMC) that employs second-life batteries. By analyzing the power flow characteristics among all sources within the MMC-BESS, a three-level SOC equilibrium control strategy addressing battery capacity inconsistency is proposed to balance the energy of the batteries; it includes SOC balance among the three phase legs, SOC balance between the upper and lower arms of each phase, and SOC balance of the submodules within each arm. In battery charging and discharging control, by introducing power regulations based on the battery capacity proportion of the three phase legs, the capacity deviation between the upper and lower arms, and the capacity coefficient of each submodule into the SOC feedback control loop, SOC balance of all battery modules is accomplished, effectively improving the energy utilization of the second-life battery energy storage system. Finally, the effectiveness and feasibility of the proposed methods are verified by results obtained from simulations and an experimental platform.
Introduction
With the fast-growing commercial application of electric vehicles, there will be a substantial increase in the number of batteries retired from these vehicles, leading to a great waste of resources if the batteries are simply discarded. By extending the useful life of these retired batteries through second use, the total battery life-cycle cost can be reduced and the utilization of the batteries greatly improved [1], which is of great significance for the sustainable development of the electric vehicle industry. The most economical way of reusing second-life batteries is the battery energy storage system (BESS). In conventional battery energy storage systems, a large number of batteries are connected in series or in parallel in a battery pack, which requires high battery consistency in practical applications. However, due to the high capacity inconsistency of second-life batteries and the high cost of module reconstruction, large series/parallel configurations and the "short-board" (weakest-link) effect reduce the total capacity utilization of the energy storage system, affecting energy and capacity utilization efficiency.
Flexible grouping technology is an effective method of addressing high battery inconsistency [2,3]. Unlike a conventional battery group composed of a large number of single batteries directly connected in series and in parallel, a flexible-group energy storage system consists of cascaded submodules combining low-voltage battery packs with converters. The charging or discharging current of each battery module is controlled independently based on its state parameters, effectively reducing the requirement for battery capacity consistency and the cost of regrouping. Thus, the capacity utilization efficiency and cycle life of the batteries can be improved while meeting the requirements of energy storage systems. Consequently, efficient utilization of retired power batteries is realized.
Various topologies can be used in flexible-group energy storage systems [4,5]. In applications where power flows among the AC grid, the DC bus and the batteries, the MMC-BESS has the superior advantage of overcoming the "short-board" effect. By connecting low-voltage battery packs in a distributed fashion to the DC side of each submodule, this topology combines the merits of both the MMC and the BESS, making it suitable for hybrid AC/DC micro-grids and high-voltage direct current (HVDC) power systems. Meanwhile, advances in modeling [6][7][8], control systems and modulation [9,10] have developed the MMC greatly. The battery capacity utilization of the whole MMC-BESS is limited by the submodule with the highest or lowest state of charge (SOC); therefore, SOC equilibrium control is essential for improving battery capacity utilization. Since SOC is directly related to battery capacity, capacity inconsistency can easily result in divergent real-time SOCs. When second-life batteries are widely used in battery packs in an MMC-BESS, in addition to the capacity inconsistency of batteries in the same arm, the total battery capacity between the upper and lower arms as well as the total capacity among different phases are also inconsistent, leading to greater SOC inconsistency at each level of battery modules. Thus, conventional SOC equilibrium control strategies have limited applicability, and new control methods are urgently needed.
SOC balancing control and fault-tolerant control are essential for the MMC-BESS to improve the efficiency and reliability of capacity utilization. In Reference [11], a zero-sequence voltage injection method balances the SOC among different phases; however, the calculation of the injected zero-sequence voltage involves complex mathematics, placing higher demands on the control hardware. By sorting the SOCs of all submodules, SOC balancing can be realized using the carrier-based disposition pulse width modulation (PWM) method [12]; however, the complexity increases dramatically with the number of submodules. Reference [13] proposed a simple closed-loop method to achieve SOC balancing among submodules within an arm and among phase legs, while the SOC balancing problem between the upper and lower arms was not fully considered. Some of the literature focuses on the MMC-BESS applied in vehicles, in which an AC circulating current is used to balance the SOC between the lower and upper arms; the current contains only positive- and negative-sequence components to prevent it from flowing into the DC source [14]. As the state of health (SOH) can also be used to improve battery utilization, the authors of [15] adopted DC and AC circulating currents as well as the modulation index of each submodule to achieve SOC tracking, effectively extending the cycle life of the battery system. In [16], the capacity energy in both the upper and lower arms is controlled by adjusting the circulating current after bypassing a faulty submodule, achieving SOC rebalancing even under fault operation. Reference [17] focuses on a hybrid MMC energy storage system consisting of half-bridge and full-bridge topologies, integrating different voltage and current injection methods for both inter-phase and intra-phase SOC equalization. Although various SOC equilibrium control methods have been proposed in the previous literature, the impact of capacity inconsistency has not been fully considered. When the inconsistency index grows, the control error may increase, resulting in lower battery capacity utilization.
To overcome the shortcomings of conventional SOC equalization methods under battery capacity inconsistency, this paper proposes a three-level SOC balancing control strategy based on an analysis of the power transfer relationships within the MMC-BESS. The SOC closed-loop control strategy adjusts the power command from the phase level down to each submodule, and then regulates both the DC circulating current and the AC current. By adjusting the phase power and the arm power respectively, the power of the submodules can be redistributed. To address battery capacity inconsistency, this paper proposes a novel control method based on power regulations and SOC equalization control that synchronously converges the SOCs of battery packs with different capacities, eventually achieving the same SOC for all battery modules of the MMC-BESS and effectively improving the utilization of second-life batteries. Both a simulation model and an experimental platform of a three-phase, 24-module energy storage system have been established to verify the effectiveness of the proposed control strategy.
Topology and Modulation Strategy
The schematic diagram of the MMC-BESS is shown in Figure 1. Three phase legs are connected in parallel to a common DC grid, and the midpoint of each leg is connected to the AC grid through the grid inductor L_g. Each leg consists of upper and lower arms with the arm inductor L_a and equivalent series impedance R_a. There are N cascaded submodules in each arm, each embedding a low-voltage battery pack and a half bridge. The bypass switch of a submodule is closed once a failure occurs in that submodule. The power devices T_1 and T_2 operate in a complementary manner, which means the submodule cannot output negative voltages. Some researchers add a DC-DC converter between the battery and the half bridge to reduce the battery current ripple [18].
Power Flow Analysis
The MMC-BESS is a three-port power converter system connected to the AC grid, the DC link, and the batteries. Power flow analysis is the basis of the control strategy design. High-frequency components of the voltage and current are neglected in this paper for clarity of analysis. In the following discussion, j ∈ {a, b, c} denotes the phase, k ∈ {u, d} refers to the upper and lower arms of the same phase leg, and i ∈ {1, 2, …, N} denotes the submodule within one arm.
As described in Reference [14], the output voltage of submodule u_jki consists of three parts: the AC grid component u_ACjki, the DC circulating component u_DCjki, and the AC circulating component u_Xjki. Since most of the second-harmonic current flows through the batteries, the second-harmonic circulating current can be neglected in this system [19]. The AC components of the submodule output voltages (u_ACjki and u_Xjki) in the same arm have the same phase angle. To keep the system symmetrical, the total upper-arm voltage and the total lower-arm voltage are expressed in equation (1), where u_ACj is the drive voltage of the AC current i_ACj, u_DCj is the DC drive voltage of the DC circulating current i_DCj (u_DCj ≈ U_DC/2 when R_a is small enough), and u_Xj is the drive voltage corresponding to the AC circulating current i_Xj. The arm currents i_jk are composed of these three components accordingly (equation (2)). i_DCj and i_Xj compose the circulating current i_Zj, which is a common component of both the upper and lower arms. Applying KVL to the system in Figure 1 yields the arm voltage-current relationships. Assuming that all submodules in the same arm can be regarded as a single module, the total output active power is equal to the total arm battery power. By multiplying the voltage components in equation (1) and the current components in equation (2) term by term, all of the instantaneous active and reactive powers can be found. Only the average active power is studied in this paper, and the total arm battery powers of the upper and lower arms, P_Bju and P_Bjd, are given by equation (5), where θ and φ₂ are the phase angles of I_ACj relative to U_ACj and U_Xj, respectively, and φ₁ is the phase angle between I_Xj and U_ACj. The third term in equation (5) is the AC circulating power produced by the AC circulating current I_Xj and voltage U_Xj. Though I_Xj is usually considered a current that generates power loss, here it is employed to shift power between the arms of the same phase.
The expressions for the battery pack power of each submodule in the upper and lower arms, P_Bjui and P_Bjdi, are similar to equation (5), resulting in equation (6). Comparing equation (5) with equation (6), it is clear that when the magnitudes of the three voltage components of U_jki are proportional to the corresponding components of the total arm output voltage U_jk with factor k_i, the battery power P_Bjki is also proportional to the total arm battery power P_Bjk with factor k_i. Thus, the battery power of each submodule in an arm can be distributed by adjusting the output voltage ratio k_i.
Based on equation (5), the total leg battery power P_Bj follows. To keep the grid currents balanced, each grid power P_ACj is made equal; therefore, P_Bj can be changed by managing the DC circulating current I_DCj. The power transfer between the upper- and lower-arm batteries can be controlled by modifying the AC circulating current I_Xj, and the power of individual batteries in the same arm can be controlled by adjusting the output voltage ratio k_i. In this way, individual power control of each battery pack is achieved.
SOC Balancing Control Strategy
During the operation of the BESS, the SOCs of the battery packs gradually become unequal, which decreases the capacity utilization efficiency of the batteries. Thus, SOC balancing control is essential. The SOC is defined as the ratio of the remaining capacity of a battery to its nominal capacity, and the SOC of each cell is estimated by

$$\mathrm{SOC}(t) = \mathrm{SOC}(t_0) - \frac{1}{E_B}\int_{t_0}^{t} p_B(\tau)\,\mathrm{d}\tau, \qquad (9)$$

where E_B is the battery nominal energy and p_B(t) is the instantaneous battery power. E_B is obtained by multiplying the battery voltage u_B by its capacity.
As shown in equation (9), the dynamic SOC is a first-order process with integral behavior, and its rate of change is directly related to the battery power p_B(t). Combining this with the preceding power flow analysis, this paper proposes a three-level SOC balancing control strategy comprising phase-leg SOC balancing control, upper- and lower-arm SOC balancing control, and individual-submodule SOC balancing control. Define SOC_jk as the mean SOC of all battery submodules in the same arm, SOC_j as the average SOC of all battery packs in the same phase leg, and SOC_abc as the average SOC over the three phase legs. Moreover, as the battery capacity inconsistency grows, adjusting the power according to the battery capacity proportion of the three phase legs, the capacity deviation between the upper and lower arms, and the capacity coefficient of each submodule directly balances the energy of all batteries, thus improving the utilization of second-life batteries.
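A minimal discrete-time sketch of the SOC estimate of equation (9), with an Euler update and illustrative pack parameters (not taken from the paper):

```python
import numpy as np

def update_soc(soc, p_b, e_b, dt):
    """Discrete Euler step of equation (9): the SOC changes in proportion
    to the exchanged energy p_b * dt relative to the nominal energy e_b
    (p_b > 0 taken as discharging)."""
    return np.clip(soc - p_b * dt / e_b, 0.0, 1.0)

# Example: a 48 V, 40 Ah pack (E_B = 1.92 kWh) discharging at 500 W.
soc = 0.80
e_b = 48 * 40 * 3600.0                  # nominal energy in joules
for _ in range(3600):                   # one hour in 1 s steps
    soc = update_soc(soc, 500.0, e_b, 1.0)
print(soc)                              # 0.5 kWh of 1.92 kWh -> about 0.54
```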
Phase-Leg Balancing
Depending on the operating mode, the total battery power demand P*_B is determined by the DC-link power P*_DC and the AC grid power P*_AC through the power balance of the converter. P*_B is distributed to all battery packs, and the basic power demand of each battery pack, P_Bav, is P*_B divided by the number of submodules in the system, n_SM (6N in normal operation). The total power reference of the phase leg, P*_Bj, shown in Figure 3, is obtained by combining the proportional-controller correction of the difference between the three-phase average SOC_abc and the phase-leg average SOC_j with the capacity-based power adjustment.
Here p_phj is the power regulation based on the battery-capacity proportion of the three phase legs, and p_Δj is generated by the proportional controller. From the reference power of each phase, the required DC circulating current can be deduced. The circulating current of each phase leg, i_Zj, is obtained by adding the upper- and lower-arm currents within the same phase leg, as shown in Figure 4. The DC circulating current i_DCj of each leg is extracted through a low-pass filter with a cut-off frequency below 50 Hz, and a PI controller is employed to track i_DCj, thereby achieving SOC balance among the phase legs.
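A sketch of the phase-leg reference computation as described: a capacity-proportional share of the total demand plus a proportional correction on the leg SOC error. The gain value and the exact capacity rule are assumptions for illustration, not the paper's tuned parameters:

```python
import numpy as np

def phase_leg_reference(p_b_total, soc_leg, cap_leg, k_ph=2000.0):
    """Return the per-leg battery power references P*_Bj.

    p_b_total : total battery power demand P*_B in W
    soc_leg   : the three phase-leg mean SOCs
    cap_leg   : the three phase-leg total capacities (Ah)
    k_ph      : proportional gain of the leg SOC controller (assumed value)
    """
    soc_leg = np.asarray(soc_leg, dtype=float)
    cap_leg = np.asarray(cap_leg, dtype=float)
    p_ph = p_b_total * cap_leg / cap_leg.sum()       # capacity-proportional share
    p_delta = k_ph * (soc_leg.mean() - soc_leg)      # P-controller SOC correction
    return p_ph + p_delta                            # corrections sum to zero

refs = phase_leg_reference(-112.8e3, [0.815, 0.822, 0.810], [38.0, 40.0, 39.0])
print(refs, "sum =", refs.sum())
```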
Upper and Lower Arm Balancing
As shown in Equation (5), the AC circulating current I_Xj can transfer the power p_Δarmjud between the upper and lower arms. The deviation in the power reference is obtained from a proportional controller acting on the SOC difference between the upper and lower arms, together with the difference in the power references based on arm capacity. The deviation between p_armju and p_armjd is the power reference based on the battery-capacity deviation between the upper and lower arms, from which the capacity-based power transfer between the arms is calculated. To prevent distortion of the DC-link current caused by the SOC balancing control, the three-phase AC circulating currents should contain only positive- and negative-sequence components, as shown in Figure 5. The calculation method is described in detail in [20]; the magnitude and phase angle of the positive- and negative-sequence currents are derived from the given power to be shifted between the upper and lower arms. As shown in the lower part of Figure 4, a proportional-resonant (PR) controller is employed to adjust I_Xj. Equations (14) and (16) show that the SOC balancing rate is determined by the proportional coefficients K_ph and K_arm; however, these coefficients must be limited to avoid over-modulation.
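A compact sketch of the arm-level transfer reference: a proportional term on the upper/lower SOC difference plus a capacity-deviation term, saturated to respect the over-modulation limit; the gain, the limit, and the capacity rule are assumed:

```python
def arm_transfer_power(soc_up, soc_dn, cap_up, cap_dn, p_arm_total,
                       k_arm=1500.0, p_limit=5e3):
    """Power to shift from the lower to the upper arm via the AC circulating current."""
    p_soc = k_arm * (soc_dn - soc_up)                          # SOC-balancing term
    p_cap = p_arm_total * (cap_up - cap_dn) / (cap_up + cap_dn)  # capacity rule (assumed)
    p = p_soc + p_cap
    return max(-p_limit, min(p_limit, p))  # saturate to avoid over-modulation

print(arm_transfer_power(0.820, 0.812, 41.0, 39.0, p_arm_total=-18.0e3))
```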
Submodule Balancing
The objective of the former two SOC balancing methods is to equalize the average SOC of each arm. SOC balancing of the submodules within an arm is implemented by adjusting the power reference of each submodule, p*_Bjki, as shown in Figure 6. The variable S_arm sets the power-regulating direction: since the battery packs within an arm share the same current direction, S_arm is −1 when the total arm battery power is negative (the batteries are charging) and 1 when it is positive.
Here p_Δjki is generated by a proportional controller, and p_smjki is the power regulation based on the capacity coefficient of each submodule within the same arm.
Multiplying k_i by the arm voltage reference u*_jk yields the output-voltage reference of each submodule and hence its share of the total arm active power.
The power-ratio factor k_i is calculated from the submodule power references. As before, the coefficients must be limited to avoid over-modulation. Ignoring the voltage drops across the arm and grid inductors, the limit on k_i can be expressed in terms of ΔSOC_max, the maximal difference between any battery pack's SOC and the corresponding average arm SOC; u_Bmin, the minimum battery-pack voltage; m, the rated modulation ratio; and C_avr and C_max, the average and maximum capacities within an arm. In this paper, direct current control in the dq frame is employed to control the AC grid currents. The general control structure of the MMC-BESS is shown in Figure 7.
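A sketch of how the ratios k_i might be derived from the submodule power references and clipped against an over-modulation bound; the normalization and the bound used here are simplified stand-ins for the paper's limit expression:

```python
import numpy as np

def submodule_ratios(p_refs, k_max):
    """Normalize submodule power references into voltage ratios k_i with a cap.

    Assumes all references share the sign of the arm current, so the
    proportional shares are non-negative and sum to one.
    """
    p_refs = np.asarray(p_refs, dtype=float)
    k = p_refs / p_refs.sum()      # proportional sharing, sum(k_i) = 1
    k = np.clip(k, 0.0, k_max)     # respect the over-modulation limit
    return k / k.sum()             # re-normalize after clipping

# Four submodules in one arm with slightly unequal power references
k = submodule_ratios([4.6e3, 4.4e3, 4.5e3, 4.3e3], k_max=0.30)
print(k)
```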
Simulation Results
To demonstrate the feasibility of the proposed SOC balancing control strategy under both normal and fault-tolerant operation, a simulation model based on the topology in Figure 1 was built in MATLAB/Simulink. Table 1 summarizes the parameters of the simulation model. The initial SOC values of the 24 battery modules are randomly set between 80.0% and 83.0%, and unequal capacities are preset as listed in Table 2. First, only the three-level SOC balancing is implemented: the DC link absorbs energy from the system with its reference power kept at −37.5 kW, while the AC grid delivers 93.3 kW to the system for 240 s, so the batteries are charged during this time. In the second simulation, the capacity-proportional power regulation is added, with the same power configuration, to verify the method proposed in this paper. Finally, the AC power is changed from 93.3 kW to −64.65 kW to test the strategy while the batteries are discharged. Figures 8 and 9 show that the proposed control strategy has little influence on the AC output current and the DC-link current. The global and zoomed-in waveforms of the circulating currents are illustrated in Figure 10: the SOC balancing control generates a large circulating current at the beginning of the simulation, when both the three-level SOC balancing and the capacity-based power adjustment are active; the circulating current then decreases gradually and finally stabilizes. By the end of the simulation, the capacity-based power adjustment dominates the circulating current, while the contribution of the three-level SOC balancing control is small. Figure 11 shows the simulation results of the three-level SOC balancing control without power adjustment in charge mode, Figure 12 shows the results with capacity-based power adjustment in charge mode, and Figure 13 shows the results with capacity-based power adjustment in discharge mode.
In Figure 11, only the three-level SOC balancing control strategy is implemented. Figure 11a shows that the SOCs of all batteries almost converge eventually, but the convergence is poor for batteries with unequal capacities; the maximum SOC difference among all batteries falls to 0.6%. In Figure 11b, the SOC difference among the three phase legs is reduced from 0.45% to 0.2%, which makes the influence of capacity on the SOC balance apparent. In Figure 11c,d, the SOC difference between the upper and lower arms is less than 0.001% in phase A, where the capacities are equal, but 0.4% in phase B, where the capacities differ.
In Figure 12, the capacity-based power adjustment is added to the simulation, and the SOC convergence improves compared with Figure 11. The maximum SOC difference among all batteries finally reaches 0.1%, and the SOC difference among the three phase legs falls below 0.01% in Figure 12b. The deviation between the upper and lower arms decreases to 0.05% in Figure 12d. The maximum SOC difference in the upper arm of phase B is also reduced, to 0.05% from the 0.18% shown in Figure 11f. The clear contrast between Figures 11 and 12 shows that the three-level SOC balancing control combined with capacity-related power regulation can balance batteries of different capacities. Finally, the three-level SOC balancing with capacity-based power adjustment in discharge mode is simulated in Figure 13.
Experimental Results
To verify the effectiveness of the proposed control strategy, a prototype was built in the lab, as shown in Figure 14; the parameters of the experimental system are listed in Table 3. Because of the large number of submodules, a digital signal processor (DSP) and a field-programmable gate array (FPGA) are employed in the prototype. Since the foundation of the SOC balancing strategy is individual battery power control, this paper first validates the feasibility of the internal power-flow control and then verifies the three-level SOC balancing strategy of the MMC-BESS. Figure 17 shows the output voltages of the converter, which have nine levels. Figure 18 illustrates the currents of the nine battery submodules (au, ad, bu, bd, cu1, cu2, cu3, cu4, and cd); the average value of each battery current was computed by the oscilloscope and is marked on the image. The average currents of the battery submodules in phases a, b, and c decrease in turn, the average current of the upper arm exceeds that of the lower arm in phase b, and the battery currents of the submodules within the upper arm of phase c decrease in equal steps. Figure 19 illustrates the waveforms of the three-phase circulating currents, which contain both DC and AC circulating components; these waveforms were measured at the same time as those in Figure 17. Comparing the phase angles of the converter output voltages and the circulating currents, phase differences of 90° and −90° are found in phases A and C, respectively; in these cases the AC circulating currents do not transfer power between the arms of the same phase leg. The circulating current in phase B, however, is in antiphase with the converter output voltage and thus transfers substantial active power from the lower arm to the upper arm of phase B. Since the total battery power reference of each phase leg decreases from phase A to phase C, the DC components of the circulating currents decrease correspondingly. If the DC-link power is greater than that of the AC side, the batteries are charged, and vice versa. Because each battery module has an individual reference power, the total battery reference power of each phase and each arm differs.
Figure 20 shows the experimental results of the SOC balancing control of the three-level MMC-BESS. Owing to the limitations of the experimental conditions, the battery-module capacities are essentially equal; the results for unequal capacities were verified in the preceding simulations. Figure 20a shows the SOC of all battery modules in the system: the SOC difference decreases from 15.64% at the beginning to 1.67% after 40 min, which verifies the effectiveness of the proposed balancing strategy. Figure 20b,c show the trends of the three-phase SOC and of the arm SOC, respectively; over time, the three-phase and arm SOCs also converge. The Coulomb-counting method was used for SOC estimation; because of inevitable errors in the current sampling, a small deviation remains in the SOC estimates, which affects the SOC convergence at the end of the balancing process. Nevertheless, the experiment broadly conforms to the theoretical expectations and verifies the correctness of the theory.
Conclusions
This paper focuses on second-life batteries used in an MMC-BESS and addresses the problems caused by both SOC and capacity inconsistency. The internal power flow among the AC grid, the batteries, and the DC link is analyzed. The results show that the DC and the fundamental-frequency AC circulating currents can be used to adjust the total battery power of the phase legs and the arms, respectively, and that the power of each submodule can be changed by adjusting its output voltage. On this basis, a three-level SOC balancing control strategy is proposed: the power is adjusted according to the capacity ratio of the three phase legs, the capacity difference between the upper and lower arms is taken into account, and the proportional capacity of each submodule cooperates with the closed-loop SOC control. In this way, SOC balance of the batteries in the MMC-BESS is achieved. Finally, the effectiveness and feasibility of the proposed methods are verified by simulation results and by the experimental platform.
Figure 1. The configuration of the MMC-BESS and its submodule.
Figure 4. Block diagram for circulating current control.
Figure 5. The upper and lower arms SOC balancing controller.
Figure 7. The general control structure of the MMC-BESS.
Figure 8. The AC grid output currents.
Figure 10. The circulating currents of the three-phase legs.
Figure 11. Three-level SOC balancing without power adjustment in charge mode. (a) The SOC of all 24 battery modules. (b) The SOC balancing among the three phase legs. (c) The SOC of the upper and lower arms within phase A. (d) The SOC of the upper and lower arms within phase B. (e) Submodule SOC of the upper arm in phase A. (f) Submodule SOC of the upper arm in phase B.
Figure 12. Three-level SOC balancing with capacity-based power adjustment in charge mode. (a) The SOC of all 24 battery modules. (b) The SOC balancing among the three phase legs. (c) The SOC of the upper and lower arms in phase A. (d) The SOC of the upper and lower arms in phase B. (e) Submodule SOC of the upper arm in phase A. (f) Submodule SOC of the upper arm in phase B.
Figure 13. Three-level SOC balancing with capacity-based power adjustment in discharge mode.
Figure 15. The experimental result of the AC output current.
Figure 16. The waveforms of the DC-link voltage and current.
Figure 17. The waveforms of the converter output voltages.
Figure 18. The currents of the battery modules.
Figure 19. The waveforms of the three-phase circulating currents.
Figure 20. The experimental results of the three-level SOC balancing. (a) The SOC balancing of each module. (b) The interphase SOC balancing. (c) The intra-phase SOC balancing. (d) The SOC balancing of the upper arm of phase B. (e) The SOC balancing of the lower arm of phase B.
Table 1. The parameters of the simulation system.
Table 2. The initial SOCs and capacities of the 24 battery modules.
Table 3. The parameters of the experimental system. Figures 15 and 16 show the waveforms of the grid current and the DC-link voltage and current. | 6,405 | 2018-04-09T00:00:00.000 | [
"Engineering"
] |
Smart Animal Repelling Device: Utilizing IoT and AI for Effective Anti-Adaptive Harmful Animal Deterrence
The coexistence of human populations with wildlife often leads to conflicts in which harmful animals damage crops and property and threaten human welfare. Traditional methods of repelling animals suffer from limitations that affect both their effectiveness and their environmental impact. The present research outlines a growing body of solutions that use the Internet of Things and machine-learning techniques to address this issue. The study centers on a Smart Animal Repelling Device (SARD) that seeks to safeguard crops from ungulate attacks, substantially reducing production costs. This is achieved by creating virtual fences that combine Artificial Intelligence (AI) with ultrasonic emission. The study introduces a comprehensive distributed system for resource management in Edge or Fog settings. The SARD framework applies the principle of containerization, using Docker containers to execute Internet of Things (IoT) applications as microservices. The software system within the proposed architecture can host various IoT applications together with resource- and power-management strategies for Edge and Fog computing systems. The experimental findings demonstrate that the intelligent animal-repellent system performs animal detection effectively on power-efficient computing hardware. The implementation maintains a high mean average accuracy (93.25%) while meeting the real-time demands of anti-adaptive harmful-animal deterrence.
Introduction to Animal Deterrence and Repelling Device
Implementing efficient animal-repelling devices has become more crucial owing to escalating confrontations between humans and animals, which pose substantial risks to agricultural output and human welfare. Traditional approaches are constrained in both effectiveness and ecological sustainability. As a result, the search has shifted toward creative solutions that incorporate sophisticated technologies such as the Internet of Things (IoT) and Artificial Intelligence (AI) [2,3]. These technologies deter animals and respond effectively to their evolving behaviors in real time.
One of the critical obstacles of conventional approaches is their limited capacity to accommodate the varied behaviors of noxious fauna. Traditional fear tactics or physical obstacles often have limited effectiveness against animals that adapt rapidly to these deterrents [4]. The magnitude of these conflicts has a significant quantitative effect: the agricultural sector worldwide experiences approximate yearly losses of $50 billion from damage caused by animals.
Recognizing the dynamic nature of wildlife behavior underscores the crucial need for anti-adaptive harmful-animal deterrence [5]. A quantitative evaluation indicates that the adaptive actions of detrimental animals result in a 20% escalation in the magnitude and frequency of agricultural damage, highlighting the pressing need for intelligent and responsive deterrent measures.
The convergence of the IoT and AI offers a transformative change in the methodology of animal deterrence [6]. The potential effect is shown by pilot studies of smart animal-repelling devices, which reported a 30% decrease in crop losses. These systems use AI for continuous monitoring and decision-making, achieving a 95% accuracy rate in detecting and deterring dangerous wildlife.
Traditional approaches not only lack flexibility but also present environmental challenges [7,8]. Chemical deterrents and physical barriers have been associated with a 15% rise in soil degradation and water contamination. Using the IoT and AI in agriculture enables precision-farming techniques that mitigate these environmental consequences, achieving a 25% reduction in chemical use and a 30% drop in resource consumption.
The main contributions are: • The Smart Animal Repelling Device (SARD) integrates a Passive Infrared (PIR) sensor, a solar panel, and Long Range (LoRa) technology for real-time, energy-efficient animal detection.
• Using the Single Shot Multibox Detector (SSMD) with the Region-based Convolutional Neural Network (R-CNN) model on edge devices improves the accuracy and speed of real-time animal identification.
• Through re-identification designs, the system architecture ensures effective animal deterrence via identity association, monitoring, and timely alarms.
The remaining sections are organized as follows. Section 2 provides a thorough literature review and discusses current research and methodology in the field of animal deterrence. Section 3 presents the SARD, including its characteristics, construction, and integration of AI and IoT. Section 4 gives the experimental analysis and results, demonstrating the performance and effectiveness of the intelligent animal-repellent system through practical trials. Section 5 concludes with a summary of the significant findings, their ramifications, and suggestions for future advancements in animal deterrence.
Literature Survey and Analysis
This section examines previous studies and methodology in animal-repellent research, comprehensively analyzing conventional strategies and their constraints. It critically analyzes the effectiveness and environmental consequences of traditional procedures, establishing a foundation for the novel solutions introduced in later sections. Adami et al. (2021) introduced the Embedded Edge-AI-based Intelligent Animal Repelling System (EEAIRS), which incorporates functionalities such as real-time animal identification, LoRa communication, and a solar-powered design [9]. The experimental findings showed that the system achieved an accuracy of 87% in detecting animals and reduced false alarms by 30%; it repelled animals with a 92% success rate and exhibited a 15% drop in power consumption, strong evidence of its usefulness. Dampage et al. (2021) presented the Automated Virtual Elephant Fence (AVEF), a novel system incorporating detection, alerting, and coordinated redirection mechanisms [10]. The study demonstrated real-time identification of elephants, integration with warning systems, and coordinated redirection strategies, yielding an 80% decrease in elephant invasions, a detection accuracy of 95%, a 70% reduction in false alarms, and a 20% improvement in total crop output.
Anitha et al. (2021) introduced a novel agricultural method called the Peacock Repellent Technique (PRT), which combines ultrasonic emission with intelligent identification systems to safeguard crops [11]. The experimental results showed an 88% success rate in deterring peacocks and a 25% decrease in crop damage; the system identified peacocks with 95% accuracy, leading to a 20% improvement in crop output. These findings underscore the effectiveness of the proposed approach.
Balakrishna et al. (2021) introduced a Crop Protection System (CPS) that utilizes the IoT and machine-learning techniques to mitigate animal infiltration [18]. The approach integrated real-time animal identification, adaptive machine learning, and streamlined communication. The experimental findings demonstrated high accuracy, with animal detection reaching 92%; false positives were reduced by 25%, the success rate in repelling animals reached 85%, and resource consumption dropped by 30%. These results together establish the efficacy of the system [19]. Simla et al. (2023) presented the Agricultural Intrusion Detection (AID) system, which combines the IoT with deep-learning techniques using an enhanced lightweight Machine-to-Machine (M2M) protocol [13]. The methodology encompasses real-time intrusion detection, deep-learning models, and lightweight communication techniques. The experimental results revealed 94% accuracy in detecting intrusions, a 20% decrease in false positives, 90% effectiveness in deterring animals, and a 25% reduction in communication latency, emphasizing the efficacy of the suggested system. Moallem et al. (2021) introduced an Explainable Deep Vision System (EDVS) designed for animal categorization and detection in trail-camera photos [14]. The approach integrates interpretable deep-learning models and automated post-deployment retraining, resulting in 85% accuracy in animal classification, a 30% decrease in misclassifications, an 80% detection success rate, and a 15% enhancement in model interpretability. These outcomes underscore the significance of interpretability and efficacy within the system.
Thangavel et al. (2022) proposed an IoT-based embedded system designed to address human-wildlife conflicts via animal identification and discrimination [15]. The proposed approach incorporates real-time detection, discrimination algorithms, and IoT connectivity. The experimental results demonstrated high accuracy (92%) in animal discrimination and a notable decrease (18%) in false alarms; the embedded system repelled animals with a high success rate (88%) while reducing communication overhead by 22%. Gülcü (2022) introduced an Improved Animal Migration Optimization Algorithm (IAMOA) for training feed-forward artificial neural networks [16]. The presented approach exhibited improved convergence, yielding a 25% decrease in training duration, a 30% enhancement in convergence rate, an 85% success rate in neural-network optimization, and a 20% improvement in the network's generalization capability. A 2023 study introduced an intelligent framework designed to identify and report cattle posing a risk by resting on roadways, using surveillance footage as the primary data source [17]. The methodology encompasses real-time identification, alert creation, and surveillance-footage analysis. The experimental results demonstrated 96% accuracy in identifying hazardous conditions, a 22% drop in false alarms, a 92% success rate in issuing warnings, and a 28% reduction in reaction time, confirming the efficacy of the intelligent framework.
The literature review highlights the difficulties of conventional approaches to animal repulsion, including their restricted flexibility and potential environmental implications. The diverse methods and technologies in the examined articles, such as edge-AI systems, IoT-based embedded responses, and explainable deep-vision systems, highlight the importance of new solutions that address the identified requirements [12].
Proposed Smart Animal Repelling Device
This section highlights the essential elements of the system, including ultrasonic emission, PIR sensors, and edge-computing devices, which enable real-time animal detection. Integrating IoT and AI technology augments the device's capacity to adapt and respond, fortifying its efficacy in safeguarding crops from detrimental fauna. The experimental configuration and technique are thoroughly described, demonstrating the precision, effectiveness, and adeptness in managing system resources. The section concludes by analyzing the SARD architecture's prospective uses and future advancements. The system architecture of the SARD is shown in Figure 1; the system uses IoT, edge computing, and AI algorithms.
Intelligent device
The system is founded upon smart animal-repelling gadgets that provide instantaneous identification and deterrence of animals. To achieve this objective, a revised iteration of the animal-repelling devices has been combined with a compact, high-performance edge-computing gadget that runs on Convolutional Neural Network (CNN) technology.
Identification and Ultrasound Production
The fundamental design of the animal-deterring device's board remains the 32-bit ARM Cortex-M0+ core operating at 24 MHz, with 64 KB of RAM and 256 KB of flash storage. The device incorporates a LoRa and an XBee radio module, compliant with the LoRaWAN and IEEE 802.15.4 standards respectively. The gadget uses a photovoltaic panel and lithium-polymer batteries charged via a Maximum Power Point Tracker (MPPT) system. It is outfitted with a PIR sensor capable of detecting targets and initiating the animal-identification feature. The tweeter generates ultrasound at a sound-pressure level of 120 dB at a distance of about 1 m, covering a broad frequency range from 17 kHz to 28 kHz; the frequencies are adjusted according to the specific animal to be deterred.
Real-time detection
Several edge-computing units were considered to implement the animal-identification model and enhance its real-time efficiency: the Raspberry Pi 3B+ with or without the Intel Movidius Neural Compute Stick (NCS), and the NVIDIA Jetson Nano. The Intel Movidius NCS is the first iteration of the neural compute sticks, an integrated AI platform developed by Movidius. Its USB hardware acceleration is specifically designed to let low-power devices attain high frame rates. The core component of this gadget is the Myriad 2 Vision Processing Unit (VPU), an AI-optimized chip designed to accelerate vision computing with R-CNN. The Intel Movidius NCS has a USB 3.0 interface, enabling convenient connection to edge devices such as the Raspberry Pi.
The NVIDIA Jetson Nano is a recent addition to NVIDIA's Jetson family: a compact, high-performance embedded computer with a dedicated Graphics Processing Unit (GPU) for hardware acceleration. It can run numerous neural networks concurrently and handle multiple high-resolution sensors, delivering high-performance computation at a power consumption of 5 W to 10 W.
Integration among animal identification and repelling device
Upon motion detection by the PIR detector, the microprocessor transmits an "activity identification" signal to the edge gadget over the XBee radio connection. Note that the edge-computing equipment is fitted with the XBee radio, enabling it to use IEEE 802.15.4 features. The edge computer activates the camera and then runs its R-CNN program to recognize the target precisely. If an animal is spotted, a message is sent to the animal-repelling device, specifying the appropriate ultrasound range to be produced based on the animal's classification. The "activity" information is transferred from the repeller gadget to the LoRa gateways using LoRa technology, and the gateways forward the data packet to the servers.
R-CNN Model
The object-identification method used in this study follows the R-CNN methodology, which incorporates deep models. The R-CNN pipeline consists of four primary components: selective search, a pre-trained R-CNN backbone, class prediction, and bounding-box prediction. Selective search is used to extract high-quality region proposals from the input photos; these regions exhibit diverse dimensions, sizes, and shapes. The pre-trained network sits between the selective-search stage and the output stage and uses forward computation to extract features for each proposed region. For object classification, several Support Vector Machines were trained, each using the proposed regions' features and their corresponding labelled categories. The bounding-box predictions are likewise developed from each proposed region's features and the labelled bounding boxes. This framework is then connected with a voltage regulator, a Pi Camera, light bulbs, WiFi, and a buzzer; the software component responsible for the hardware functionality is implemented in embedded C. The visual forecasting is achieved using machine-learning models such as R-CNN and SSMD, which facilitate object identification and enable the prediction of animal species.
SSMD Model
The SSMD architecture comprises several elements, including a base-network block and multiple multiscale feature blocks, as seen in Figure 2.
Fig. 2. The SSMD model design
The initial image features are extracted by the base-network block built on R-CNN. Anchor boxes are constructed from the feature map to identify small items within the source photos, and multiple multiscale feature blocks are used to reduce the spatial dimensions. The multiscale feature blocks identify objects of varying sizes using the predicted boundaries and anchor points. The scale value of every feature-map level in the SSMD is set by manual definition: detection starts from a minimum scale of 0.2 at the Conv4_3 level and progresses linearly up to a maximum of 0.9. The anchor width and height are determined by combining the scale factor with the intended aspect ratio, as seen in Equations (1) and (2); the default aspect ratio is set to 1.
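The linear scale rule described here matches the standard SSD anchor convention; a minimal sketch of the scale and width/height computation (aspect ratios beyond 1 are shown only for illustration):

```python
import math

def anchor_scales(m, s_min=0.2, s_max=0.9):
    """Linearly spaced anchor scales for m feature-map levels (SSD convention)."""
    return [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]

def anchor_dims(scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """Anchor width/height per aspect ratio: w = s*sqrt(ar), h = s/sqrt(ar)."""
    return [(scale * math.sqrt(ar), scale / math.sqrt(ar)) for ar in aspect_ratios]

scales = anchor_scales(6)
print([round(s, 3) for s in scales])                      # 0.2 ... 0.9
print([(round(w, 3), round(h, 3)) for w, h in anchor_dims(scales[0])])
```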
Identity Association and Tracking
If an animal traverses the area covered by two cameras, it may be erroneously identified as two distinct creatures. It is therefore essential to ascertain whether two detections of the same category correspond to the same entity; this determination helps the system to count the detected animals accurately and to monitor their movements effectively.
Monitoring within a single camera employs Intersection-over-Union (IoU) based matching to track targets across frames. The re-identification algorithm is used to extract features only when animals transition between cameras. After the object is recognized in the picture, the region encompassing the animal is passed to the re-identification network; the resulting feature vector is then compared with the feature vectors recorded in a dataset. If the dataset has no entries, the animal is inferred to be seen for the first time, and a new identity is allocated, as shown in Equation (4).
d(x, y) = ‖f_x − f_y‖
Here d(x, y) denotes the Euclidean distance between the feature vectors f_x and f_y of images x and y, and q represents the feature vector of the analysed picture, which is compared against the repository of stored feature vectors. The values μ and σ denote the average and standard deviation of the inter-class Euclidean distance; they are determined individually for every species using the training set. The calculation of μ involves measuring the distances between each individual in the training set and all other individuals, then averaging these distances. The approach allocates a fresh identity when the Euclidean distance between the current picture and the most similar image in the database exceeds the average inter-class distance by more than two standard deviations (μ + 2σ); otherwise, the identity of the nearest matching animal is assigned. In both cases, the database is updated with the feature vector and its associated identity, to be used for future feature matching. This identity-determination process could also be replaced by a more dynamic, learned strategy.
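A sketch of the matching rule just described: compute Euclidean distances from the query feature vector to every stored vector of the detected species, and create a new identity when even the best match exceeds the μ + 2σ threshold learned on the training set. All names and numbers are illustrative:

```python
import numpy as np

def match_identity(query, gallery, mu, sigma):
    """Return an existing identity key or -1 for a new animal.

    query     : feature vector of the current detection
    gallery   : dict {identity: feature vector} for the detected species
    mu, sigma : mean / std of inter-class Euclidean distances (from training)
    """
    if not gallery:                      # first sighting of this species
        return -1
    ids = list(gallery.keys())
    dists = [np.linalg.norm(query - gallery[i]) for i in ids]
    best = int(np.argmin(dists))
    # New identity if even the closest match lies beyond mu + 2*sigma
    return -1 if dists[best] > mu + 2.0 * sigma else ids[best]

rng = np.random.default_rng(1)
gallery = {0: rng.random(128), 1: rng.random(128)}
print(match_identity(rng.random(128), gallery, mu=4.5, sigma=0.3))
```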
The animal's motion relative to the device (i.e., left, right, or inward) is determined using the linked identities; detecting inward movement is crucial for providing timely warnings and alerts. The inward motion of frame x is determined by Equation (5), where A_x denotes the area of the bounding box in frame x and T denotes a threshold value. If the variation in the bounding-box size between time steps exceeds the threshold T, the animal is classified as migrating inward. This methodology is agnostic to the animal's size and can monitor both small and large animals. A comparable threshold is used to ascertain the trajectory of lateral motion. The device notifies the central computer when it detects inward movement of the animal toward itself; the central computer then activates the deterrence device, for example by emitting noises or flashing lights. Once enabled, the system observes the animal's response and determines whether it has moved away from its original position. The central server collects data from the various sources and disseminates user alerts as required, notifying users of potential hazards. The notification includes the estimated geographical coordinates of the animal, recent instances of animal detection, the species of the identified mammals, the timestamp of the most recent sighting for each animal, and a projection of the path along which they are moving. Providing such data is of utmost importance in facilitating prompt and resolute measures.
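A sketch of the size-based inward-motion test described by Equation (5): flag inward movement when the relative growth of the bounding-box area between frames exceeds a threshold; the threshold value is an assumption:

```python
def moving_inward(area_prev: float, area_curr: float, tau: float = 0.15) -> bool:
    """Flag inward motion when the box area grows by more than a fraction tau.

    Using relative growth keeps the test agnostic to the animal's size,
    as the text notes for both small and large animals.
    """
    if area_prev <= 0:
        return False
    return (area_curr - area_prev) / area_prev > tau

print(moving_inward(1200.0, 1450.0))  # ~21% growth -> True
print(moving_inward(1200.0, 1250.0))  # ~4% growth -> False
```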
Fig. 3. Architecture of the proposed animal detection system
The proposed system for identifying and preventing animal incursion is shown in Figure 3. The distributed design has several components: the camera and computing gadgets, an optional edge computer, and the central server. The endpoints are strategically positioned on-site at carefully chosen locations to maximize coverage of the vulnerable regions. The interconnection between gadgets and sensors is established via physical cables, while these devices and the edge or central servers communicate wirelessly over WiFi or cellular networks. The central server establishes communication with all registered devices and then checks their operational status and detects any current issues. The tasks of edge servers are comparable to those of the central server. The central server can also collect and utilize user data and assumes the duty of transmitting notifications and messages to users via appropriate communication channels. This section has presented the SARD, which utilizes ultrasonic emission, PIR sensors, and edge computing to provide real-time animal detection. The amalgamation of IoT and AI fosters improved flexibility, as shown by comprehensive experimental configurations highlighting the precision and efficiency of the system's use of resources. The section concludes by emphasizing possible applications and future directions for the SARD architecture.
Simulation Results and Findings
The software requirements for the suggested study involve Python 3.8 for the edge-device computing, TensorFlow 2.5 for implementing the deep-learning models, and Docker for containerization, allowing smooth deployment of the IoT services. The animal-recognition methods were rigorously tested using MATLAB R2021b as a modeling tool. The dataset used for model training consisted of 10,000 photos of different dangerous species, ensuring robustness. The hardware for edge computing is a Raspberry Pi 4 Model B with a quad-core ARM Cortex-A72 CPU at 1.5 GHz, 4 GB of RAM, and a 64 GB microSD card; these specifications are essential for optimal efficiency in real-world settings. The mean accuracy, obtained over all days and techniques, is 91.88%; this consistent performance highlights the system's efficacy in real-time animal identification and repulsion.
Fig. 5. Precision analysis of animal detection and repelling mechanism
The precision findings are shown in Figure 5, illustrating the share of correct positive predictions among all predicted positive cases. Precision is computed by dividing the number of true-positive predictions by the total of true and false positives, multiplied by 100. The mean precision across all days and techniques is 87.37%, indicating the system's efficacy in detecting and deterring hazardous animals while minimizing false positives. The F1-score findings are shown in Figure 7, illustrating the harmonic mean of precision and recall: F1 = 2 × (precision × recall)/(precision + recall), expressed as a percentage. The mean F1 score over all days and techniques is 89.83%, which suggests that the system achieves a balanced performance in precision and recall, effectively facilitating animal identification and repulsion.
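The four metrics quoted in this section follow the usual confusion-matrix definitions; a self-contained sketch with placeholder counts:

```python
def detection_metrics(tp: int, tn: int, fp: int, fn: int):
    """Accuracy, precision, recall, and F1 score as percentages."""
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    precision = 100.0 * tp / (tp + fp)
    recall = 100.0 * tp / (tp + fn)
    f1 = 2.0 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Placeholder counts for one day of detections (not the paper's raw data)
acc, prec, rec, f1 = detection_metrics(tp=910, tn=928, fp=132, fn=90)
print(f"accuracy={acc:.2f}% precision={prec:.2f}% recall={rec:.2f}% F1={f1:.2f}%")
```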
The suggested SARD technique exhibits a notable level of performance, with average accuracy, precision, recall, and F1 score of 91.88%, 87.37%, 91.01%, and 89.83%, respectively. These findings underscore the system's efficacy in real-time animal identification and repulsion and demonstrate its capacity for well-rounded performance, yielding an efficient and dependable system for mitigating the presence of detrimental animals in agricultural environments.
Conclusion and Future Study
Deterring animals is essential to protect agricultural areas from the negative consequences of wildlife encroachment, which results in crop destruction and financial losses. Conventional approaches are constrained, necessitating novel alternatives. The SARD is proposed to harness the capabilities of the IoT and AI to establish a deterrent system that effectively and efficiently repels hazardous animals that have developed adaptive behaviors. The SARD incorporates advanced technology, including an intelligent animal-deterrent device with real-time animal-identification capabilities. The gadget uses ultrasonic emission as its primary mechanism, built around a resilient ATSAMD21G18A core, and incorporates a LoRa module and a PIR sensor to detect and identify targets. The implementation uses edge-computing devices for real-time animal identification, including the Raspberry Pi 3B+, the Intel Movidius NCS, and the NVIDIA Jetson Nano. The seamless integration of the detection and repelling functionalities allows prompt and precise reactions to identified threats. The testing results demonstrate the effectiveness of the SARD system, with an average accuracy of 91.88%, precision of 87.37%, recall of 91.01%, and an F1 score of 89.83%. These findings highlight the system's efficacy in practical situations and its dependability in detecting and deterring hazardous fauna.
Obstacles remain in refining the ultrasonic frequencies and enhancing power efficiency, which present opportunities for future investigation. Extending the system's functionality to a wider range of terrains and weather conditions would significantly bolster its practicality. Prospects include investigating more sophisticated artificial-intelligence models, integrating supplementary sensors for environmental tracking, and expanding the system's capacity to cover extensive agricultural landscapes.
Edge servers are used to accommodate extensive deployments in which the number or distribution of local gadgets surpasses the capacity of a single central server. The re-identification model designed for each animal category is maintained on the servers. The server can use persistent storage, such as a NoSQL database, which facilitates rapid retrieval of feature vectors and keeps a record of animal-detection histories per device: the subject tracker stores animal-movement data and the current count of active detections for each registered gadget.
Fig. 4. Accuracy analysis of animal detection and repelling mechanism. The accuracy results are shown in Figure 4, where accuracy is the ratio of correctly recognized instances to the total number of cases: the number of correct predictions divided by the total number of predictions, multiplied by 100. The mean accuracy, calculated by averaging the accuracy values obtained over all days and techniques, is 91.88%. | 5,639.2 | 2024-01-01T00:00:00.000 | [
"Environmental Science",
"Computer Science",
"Engineering"
] |
BIFURCATION ANALYSIS OF THE DYNAMICAL SYSTEM FOR A THREE-LAYERED VALVE WITH PERPENDICULAR ANISOTROPY
The features of switching dynamics in a model of a three-layered valve have been investigated theoretically and numerically. For this purpose, the system of ordinary differential equations in the approximation of the uniform magnetization distribution for the magnetization dynamics in the valve with perpendicular anisotropy was derived. It was shown that in such a system, in contrast with the system for the in-plane anisotropy, there are only two equilibrium positions of the magnetization vector. The stability analysis of the stationary points of the system has been carried out. With its help, the classification of types of dynamics versus field and current values was performed. The regions of limit cycles existence and the regions of optimal magnetization switching were revealed.
The magnetic random access memory (MRAM) attracts the attention of electronics engineers due to its high speed of magnetic switching, low energy consumption, high data-storage density, and reliability. In the basic publication by J. Slonczewski (1996) [1], the first suggested model is composed of two ferromagnetic layers with in-plane anisotropy separated by a nonmagnetic interlayer. However, both calculations and experiments showed that switching currents in classical ferromagnets, such as cobalt and iron, are too high. It was found experimentally that the optimal materials for the ferromagnetic layers are the FeCoB ferromagnetic alloys, and the most promising design solution is a memory cell with perpendicular magnetic anisotropy of the ferromagnetic layers [3–7].
Model
The object under study is a three-layered valve structure consisting of two ferromagnetic layers separated by a nonmagnetic one. The cross-section of the structure is a d_s × d_s square; the thickness of the thin layer is d_1. These dimensions are assumed to be small enough to justify a uniform distribution of the magnetization within the layer. The magnetization of the reference (thick) layer is fixed and directed from the thick to the thin layer. This direction is taken as positive for the OZ-axis, which is perpendicular to the layers.
The current passes parallel to the OZ-axis; its density J lies in the interval from 0 to 10^13 A/m^2. The structure is placed in an external magnetic field parallel to OZ, which can be either positive or negative (Fig. 1).
Basic Equations
The theory of the phenomenon was proposed by J. Slonczewski [1]. The model is based on the fundamental Landau–Lifshitz–Gilbert equation describing the dynamics of the magnetization vector M in the free ferromagnetic layer, where H is the external magnetic field, H_a is the effective field of the magnetic anisotropy, H_f is the effective demagnetizing field arising from the finite size of the valve, and H_c is the effective field originating from the spin-polarized injection current. The exchange interaction is ignored because of the small size of the structure (the approximation of uniform magnetization). We suppose the anisotropy effective field and the magnetization of the fixed layer to be perpendicular to the valve layers, with the magnetization of the fixed layer directed toward the free layer of the valve (the vector s in Fig. 1). The magnetization of the free layer at the initial moment is taken to be co-directed with the magnetization of the fixed layer (parallel orientation). After the field and current are simultaneously engaged, it can pass to another stationary position. It is natural to name the parallel orientation of the magnetization vectors "zero" and the antiparallel orientation "one" (or vice versa).
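The displayed equation did not survive extraction; a hedged reconstruction of the standard Landau–Lifshitz–Gilbert form, with the spin-transfer contribution folded into the effective field H_c as the text describes (γ is the gyromagnetic ratio, α the damping constant, M_s the saturation magnetization), reads:

```latex
\frac{d\mathbf{M}}{dt}
  = -\gamma\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}
  + \frac{\alpha}{M_s}\,\mathbf{M}\times\frac{d\mathbf{M}}{dt},
\qquad
\mathbf{H}_{\mathrm{eff}} = \mathbf{H} + \mathbf{H}_a + \mathbf{H}_f + \mathbf{H}_c .
```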
Equation (1) can be written in dimensionless form, with time t̃ measured in the corresponding natural units and with the dimensionless effective field h_eff = h + h_a + h_f + h_c.
1) In the general case, the external field h can be decomposed along the coordinate axes.
In the case under consideration, the external field is parallel to the OZ-axis, i.e., h = h e_z.
2) The anisotropy field h_a is also oriented along the OZ-axis and is proportional to the projection m_z, where K_a is the anisotropy constant.
3) The demagnetizing field h_f is defined as h_f = −q̂m, where q̂ is a tensor (form-factor). In the geometry of this model we can put h_f = −m_z e_z (see [2], for example). 4) According to the Slonczewski–Berger theory, the contribution to the effective field caused by the spin-polarized current is proportional to the current density, where s is the unit vector of the spin polarization, whose direction coincides with the direction of the magnetization in the thick layer (in this geometry s ≡ e_z), J is the dimensional density of the spin-polarized current, and J_n is the normalization coefficient, expressed through the Planck constant ħ, the magnetic constant μ_0, the electron charge e, and the thickness d_1 of the free ferromagnetic layer. Hence, the dimensionless current density is j = J/J_n. According to [1], the scalar dimensionless function G(m) enters through the combination g = Gj. After certain algebraic transformations of (3), we obtain the vector equation (6), which in coordinate form is equivalent to the system (7). The parameters of the Co/Cu/Co structure used in the numerical calculations were α = 0.02 and P = 0.35. The first step toward a qualitative classification of the dynamical regimes in system (7) is to find the equilibrium states of the magnetization in the free layer of the valve. For this purpose, the right-hand sides of (7), with L = L(m, h), are equated to zero, which yields an algebraic, fractionally rational set of three equations (8) for the three variables m_x, m_y, m_z. The parameters P, k, α are regarded as fixed (internal), whereas the field h and the current density j are tunable (external, or control) parameters. The solutions m_x, m_y, m_z of (8) at given values of h and j are precisely the stationary points of (7). At h = 0, j = 0, the system (8) degenerates, and system (7) has two stationary points on the surface of the unit sphere, with coordinates (0, 0, ±1)^T, together with a singular line that coincides with the equator of the sphere.
To find the singular points at nonzero external parameters, it is necessary to solve (8) in the general form. It is not difficult to show that it has no roots other than (0, 0, ±1)^T.
Type and stability of singular points
1) Point T_1 = (0, 0, +1)^T. Linearizing system (7) in the vicinity of this singular point and examining its eigenvalues λ_1^+ and λ_2^+, one finds that their product is positive; therefore, this singular point cannot be of saddle type, but only a focus or a node.
In the plane of the control parameters (h, j), the line L_1 separates regions II and III, in which the directions of rotation of the trajectories in the vicinity of the singular point are opposite. On the line L_1 itself, the singular point degenerates into a node.
The line L_2 separates the regions of focus stability (I) and instability (II); therefore, according to the Andronov–Hopf theorem, a limit cycle must exist in the vicinity of this line. It indeed exists below the line L_2 and is unstable. The bifurcation diagram for the singular point T_1 = (0, 0, +1)^T is shown in Fig. 2a.
2) Point T_2 = (0, 0, −1)^T. The system is linearized in the vicinity of this point in the same manner.
The line L_3 separates the regions with opposite directions of rotation of the trajectories around the point T_2; points on this line correspond to nodes. The line L_4 separates the regions of focus stability and instability; as above, this is the line of appearance/disappearance of a limit cycle, and the limit cycles exist below this line. The bifurcation diagram for the point T_2 = (0, 0, −1)^T is shown in Fig. 2b. In the centre of Figure 3, the superposition of the phase diagrams for the points T_1 and T_2 is displayed.
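The focus/node/saddle distinctions used above follow from the eigenvalues of the linearization; a generic numerical sketch of this classification (the paper's specific matrices did not survive extraction, so the example Jacobians are placeholders):

```python
import numpy as np

def classify_fixed_point(jac):
    """Classify a 2x2 linearization by its eigenvalues (degenerate cases ignored)."""
    lam = np.linalg.eigvals(np.asarray(jac, dtype=float))
    if np.all(np.isreal(lam)):
        lam = lam.real
        if lam[0] * lam[1] < 0:          # eigenvalues of opposite sign
            return "saddle"
        kind = "node"
    else:                                # complex pair -> rotation
        kind = "focus"
    stability = "stable" if np.all(lam.real < 0) else "unstable"
    return f"{stability} {kind}"

print(classify_fixed_point([[-0.1, -1.0], [1.0, -0.1]]))  # stable focus
print(classify_fixed_point([[0.05, -1.0], [1.0, 0.05]]))  # unstable focus
```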
The critical lines L_1, L_2, L_3, L_4 divide the (h, j)-plane into regions with identical magnetization dynamics and equivalent phase portraits. Note that, for the opposite magnetization s of the reference layer, the common bifurcation diagram would be reflected about the vertical axis j.
Results
If the control parameters h and j belong to regions 1, 2, or 3, the singular point T_1 is an unstable focus, although the direction of rotation of the magnetization vector differs between these regions; at the same parameters, the singular point T_2 is a stable focus. Therefore, for parameters taken from these regions, stable switching from the parallel to the antiparallel configuration takes place, which for region 2 happens with a change of the rotation direction (see the corresponding hodographs in Fig. 3). It should be noted that, at the same values of the current density, the speed of switching is higher in region 1 than in regions 2 and 3.
For parameters from region 4, the points T_1 and T_2 are unstable foci with opposite directions of trajectory rotation; on the surface of the unit sphere they are separated by a stable limit cycle, which attracts the trajectories started from T_1 and T_2. Note that the trajectories started from T_1 have a turning point, where they change the direction of rotation. Therefore, for parameters in region 4, switching is impossible. The same holds for region 5; the only difference from region 4 is the absence of the turning point and, hence, the unidirectional rotation of the magnetization along all trajectories (Fig. 3). For field and current values from region 6, the singular points T_1 and T_2 are stable foci with the same direction of trajectory rotation, separated by an unstable limit cycle (Fig. 3); in this region of the parameters, switching is impossible.
For control parameters in region 7, the point T_1 is a stable focus, whereas T_2 is an unstable one with a constant direction of trajectory rotation (Fig. 3). Here the reverse switching of the magnetization is possible, i.e., from the antiparallel to the parallel configuration; this switching occurs mainly under the influence of the magnetic field.
Here P is the spin-polarization parameter at the valve interface. Taking this into account, the effective magnetic field for the valve with perpendicular anisotropy combines the contributions listed above. | 2,695 | 2018-01-01T00:00:00.000 | [
"Physics"
] |
Backreacting holographic superconductors from the coupling of a scalar field to the Einstein tensor
We investigate the properties of the backreacting holographic superconductors arising from the coupling of a scalar field to the Einstein tensor in the background of a d-dimensional AdS black hole. Imposing the Dirichlet boundary condition on the trial function without the Neumann boundary conditions, we improve the analytical Sturm-Liouville method with an iterative procedure to explore the pure effect of the Einstein tensor on the holographic superconductors, and we find that the Einstein tensor hinders the condensate of the scalar field but does not affect the critical phenomena. Our analytical findings are in very good agreement with the numerical results from the "marginally stable modes" method, which implies that the Sturm-Liouville method remains powerful for studying the holographic superconductors from the coupling of a scalar field to the Einstein tensor even when the backreactions are taken into account.
I. INTRODUCTION
It is well known that superconductivity is one of the most remarkable phenomena observed in physics in the 20th century [1]. However, the core mechanism of the high-temperature superconductor systems, which cannot be described by the usual Bardeen-Cooper-Schrieffer (BCS) theory [2], is still one of the unsolved mysteries in theoretical physics. Interestingly, it was suggested that it is logical to investigate the properties of high-temperature superconductors on the boundary of spacetime by considering classical general relativity in one higher dimension, with the help of the Anti-de Sitter/conformal field theories (AdS/CFT) correspondence [3][4][5]. In the probe limit, Gubser observed that the spontaneous U(1) symmetry breaking by bulk black holes can be used to construct a gravitational dual of the transition from the normal to the superconducting state [6], and Hartnoll et al. found that the properties of a (2 + 1)-dimensional superconductor can indeed be reproduced in the (3 + 1)-dimensional holographic dual model based on the framework of usual Maxwell electrodynamics [7]. Extending the investigation to holographic superconductor models away from the probe limit, i.e., taking the backreactions of the spacetime into account, the authors of Ref. [8] showed that even the uncharged scalar field can form a condensate in the (2 + 1)-dimensional holographic superconductor model. Along this line, there has been growing interest in studying the effects of the backreaction on the holographic s-wave, p-wave [42][43][44][45][46][47][48][49], and d-wave [50] dual models. Reviews of the holographic superconductors can be found in Refs. [51][52][53][54].
Most of the aforementioned works on gravitational dual models focus on superconductors without impurities. As a matter of fact, studying the effect of impurities is often important, since their presence can drastically change the physical properties of superconductors in condensed matter physics [55]. Within the AdS/CFT duality, Ishii and Sin investigated the impurity effect in a holographic superconductor by turning on a coupling between the gauge field and a new massive gauge field, and found that the mass gap in the optical conductivity disappears when the coupling is sufficiently large [56]. Zeng and Zhang studied the single normal impurity effect in a superconductor using the holographic approach, showing that the critical temperature of the host superconductor decreases as the size of the impurity increases, and that the phase transition at the critical impurity strength (or the critical temperature) is of zeroth order [57]. Fang et al. extended the study to the fermionic phase transition induced by an effective impurity in holography and obtained a phase diagram in the (α, T) plane separating the Fermi-liquid phase from the non-Fermi-liquid phase [58]. More recently, Kuang and Papantonopoulos built a holographic superconductor with a scalar field coupled kinematically to the Einstein tensor and observed that, as the strength of the coupling increases, the critical temperature below which the scalar field condenses is lowered, the condensation gap decreases faster than the temperature, the width of the condensation gap is not proportional to the size of the condensate, and at low temperatures the condensation gap tends to zero for strong coupling [59]. These effects suggest that the derivative coupling in the gravity bulk can have a dual interpretation on the boundary corresponding to impurity concentrations in a real material. Note that they concentrated on the probe limit, where the backreaction of the matter fields on the spacetime metric is neglected. Thus, in this work we extend their interesting model to the case away from the probe limit and explore the effect of the Einstein tensor on the holographic superconductors with backreactions. In addition, we compare the result in five dimensions with that in four dimensions and analyze the effect of the extra dimension on the formation of the scalar condensate. In the calculation, we first use the Sturm-Liouville eigenvalue problem [60,61] to study the holographic superconductor phase transition analytically, and then rely on the "marginally stable modes" method [6,62] to confirm the analytical findings numerically and verify the effectiveness of the Sturm-Liouville method.
The organization of the work is as follows. In Sec. II, we will introduce the backreacting holographic superconductor models from the coupling of a scalar field to the Einstein tensor in the d-dimensional AdS black hole background. In Sec. III we will give an analytical investigation of the holographic superconductors by using the Sturm-Liouville method. In Sec. IV we will give a numerical investigation of the holographic superconductors by using the "marginally stable modes" method. We will summarize our results in the last section.
II. DESCRIPTION OF THE HOLOGRAPHIC DUAL SYSTEM
The general action describes a charged complex scalar field coupled to the Einstein tensor G^{μν} in a d-dimensional spacetime; it involves the Maxwell field strength F_{μν}, and ψ denotes the scalar field with charge q and mass m. When the coupling parameter η → 0, our model reduces to the standard holographic superconductors with backreactions investigated in [8,9]. It should be noted that we can rescale the bulk fields ψ and A_μ as ψ/q and A_μ/q in order to put the factor 1/q² into the backreaction parameter for the matter fields, so the probe limit is obtained safely if κ²/q² → 0. Without loss of generality, we set q = 1 and keep κ² finite when we take the backreaction into account, just as in Refs. [9][10][11][12][13][46].
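The display equation giving the action was lost in extraction. A reconstruction consistent with the quantities defined here and with the probe-limit model of Ref. [59], to be read as a sketch of the generic form rather than as the paper's exact normalization, is

\[
S=\int d^{d}x\,\sqrt{-g}\left[\frac{R-2\Lambda}{2\kappa^{2}}
-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
-\left(g^{\mu\nu}+\eta\,G^{\mu\nu}\right)D_{\mu}\psi\,(D_{\nu}\psi)^{*}
-m^{2}|\psi|^{2}\right],
\qquad D_{\mu}=\nabla_{\mu}-iqA_{\mu},
\]

where Λ = −(d−1)(d−2)/(2L²) is the cosmological constant of the d-dimensional AdS spacetime with radius L, and κ² controls the strength of the backreaction.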
To go beyond the probe limit, we adopt a metric ansatz for the black hole with curvature k = 0 in which f and χ are functions of r only and h_{ij}dx^i dx^j represents the line element of a (d−2)-dimensional hypersurface. The Hawking temperature of this d-dimensional black hole, which will be interpreted as the temperature of the CFT, is expressed through f'(r₊), where the prime denotes a derivative with respect to r and the black hole horizon r₊ is determined by f(r₊) = 0. For the considered ansatz (2), the nonzero components of the Einstein tensor G_{μν} follow by direct computation. For the scalar and electromagnetic fields we take ψ = |ψ| and A_t = φ, where ψ and φ are both real functions of r only. Thus, from the action (1) we obtain the equations of motion for the metric functions f(r) and χ(r) and for the matter fields φ(r) and ψ(r), where the prime denotes a derivative with respect to r.
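The displayed metric ansatz and temperature formula did not survive extraction; their standard forms, consistent with the surrounding definitions (f and χ functions of r only, horizon at f(r₊) = 0), are

\[
ds^{2}=-f(r)\,e^{-\chi(r)}\,dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}h_{ij}\,dx^{i}dx^{j},
\qquad
T=\frac{f'(r_{+})\,e^{-\chi(r_{+})/2}}{4\pi}.
\]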
We rely on appropriate boundary conditions to obtain the solutions in the superconducting phase, ψ(r) ≠ 0. At the horizon r = r₊ of the black hole, regularity fixes the boundary conditions. At the asymptotic boundary r → ∞, the solutions behave like ψ = ψ₋/r^{Δ₋} + ψ₊/r^{Δ₊} and φ = μ − ρ/r^{d−3}, with the characteristic exponents Δ±. According to the AdS/CFT correspondence, μ and ρ are interpreted as the chemical potential and the charge density of the dual field theory, respectively. Considering the stability of the scalar field, we find that the mass should be above the Breitenlohner-Freedman (BF) bound [63], which depends on the coupling parameter η and the dimensionality d of the AdS space. Note that, provided Δ₋ is larger than the unitarity bound, both ψ₋ and ψ₊ are normalizable and can be used to define operators on the dual field theory, ψ₋ = ⟨O₋⟩ and ψ₊ = ⟨O₊⟩, respectively [7,8]. In this work we impose the boundary condition ψ₋ = 0, since we concentrate on the condensate of the operator O₊.
In the normal phase, ψ(r) = 0, the metric coefficient χ is a constant by Eq. (5), and the analytical solutions of Eqs. (6) and (7) lead to the Reissner-Nordström AdS black hole, whose metric coefficient f reduces to that of the Schwarzschild AdS black hole when κ = 0.
On the other hand, from the equations of motion (5)-(8) we can obtain useful scaling symmetries and the corresponding transformations of the relevant quantities, where α is a real positive number. For simplicity, we use these scaling symmetries to set L = 1 when performing the calculations in the following sections.
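The displayed symmetries themselves were lost; one commonly quoted scaling of such systems, given here as an illustration rather than as the paper's full list, is

\[
r\to\alpha r,\qquad (t,x^{i})\to(t,x^{i})/\alpha,\qquad f\to\alpha^{2}f,\qquad \phi\to\alpha\phi,
\]

under which the temperature and the charge density transform as T → αT and ρ → α^{d−2}ρ, so that dimensionless ratios such as T_c/ρ^{1/(d−2)} are the meaningful observables.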
III. CRITICAL BEHAVIOR FROM THE STURM-LIOUVILLE METHOD
We use the variational method for the Sturm-Liouville eigenvalue problem [60,61] to analytically investigate the properties of the backreacting holographic superconductors from the coupling of a scalar field to the Einstein tensor. We will derive the critical behavior of the system near the phase transition point and examine the effects of the Einstein tensor and backreaction on the holographic superconductors.
For convenience, we introduce a new variable z = r₊/r and rewrite the equations of motion (5)-(8) accordingly, where the prime now denotes the derivative with respect to z. Note that the scalar field vanishes at the critical temperature T_c; therefore the expectation value of the scalar operator O₊ is small near the critical point, and we can select it as an expansion parameter ε ≡ ⟨O₊⟩ with ε ≪ 1. Since we are interested in solutions where ψ is small, from Eqs. (16) and (17) we can expand the scalar field ψ(z) and the gauge field φ(z) in powers of ε [9,13,14,15], and from Eqs. (14) and (15) the metric functions f(z) and χ(z) can be expanded around the Reissner-Nordström AdS spacetime. Considering that the chemical potential μ is corrected order by order, μ = μ₀ + ε²δμ₂ + ··· with δμ₂ > 0 [15], we obtain the order parameter as a function of the chemical potential near the phase transition, ε ∝ (μ − μ₀)^{1/2}, which indicates that the holographic s-wave superconducting phase transition with backreaction from the coupling of a scalar field to the Einstein tensor is of second order, and that the critical exponent of the system always takes the mean-field value 1/2. The Einstein tensor, the backreaction and the spacetime dimension do not influence this result. When μ → μ₀, the phase transition occurs and the order parameter vanishes at the critical point, which means that the critical value of μ is μ_c = μ₀. Now we are in a position to solve the equations order by order. At zeroth order, the equation of motion for the Maxwell field (16) reduces to a simple equation with solution φ₀(z) = λ r₊c (1 − z^{d−3}), where r₊c is the radius of the horizon at the critical point and we have set the dimensionless quantity λ = ρ/r₊c^{d−2}. Inserting this solution into Eq. (15), we obtain the equation of motion for the metric function f₀(z) and its solution, where we define a new function ξ(z) for simplicity.
At first order, the equation of motion for ψ₁(z) carries the asymptotic AdS boundary condition. Just as in the interesting works by Kolyvaris et al. [64,65], we use Eq. (25) to discuss the stability of our solutions. We can express the effective potential of ψ₁ in terms of suitably defined functions, and this potential can develop a negative gap near the black hole horizon, implying a potential instability of the black hole.
In Fig. 1 we plot the curves of the effective potential V_eff,0(z) for different values of the coupling parameter η, with the mass of the scalar field fixed to m² = −3 (top left) and m² = 0 (top right), the backreaction parameter κ = 0 and the dimensionless quantity λ = 10 in d = 5 dimensions. As a matter of fact, other choices do not qualitatively modify our results. From this figure we see the potential well forming in all cases, which can trap the scalar particles. For a nonzero mass of the scalar field, we observe that the potential well becomes wider and deeper as the coupling parameter η decreases, which indicates that increasing the coupling parameter hinders the condensation of the scalar field. For the case m² = 0, we find that the curves of the effective potential V_eff,0(z) coincide, i.e., the Einstein tensor does not influence the effective potential, which implies that the critical temperature is independent of the Einstein tensor in this case. As we will show, the behavior of the effective potential is consistent with the effects of the Einstein tensor on the condensation of the scalar field. Considering that the effective scalar mass can give a better shape to the potential, as shown in [65], we also analyze the behavior of the effective mass m_eff,0(z) in our holographic system, which reduces to the standard effective mass of Ref. [6] when η → 0 (see Fig. 1). Near the asymptotic boundary z = 0, we assume that ψ₁ takes the form ψ₁(z) ≈ (⟨O₊⟩/r₊^{Δ₊}) z^{Δ₊} F(z) [60], where the trial function F(z) obeys the boundary condition F(0) = 1. Substituting Eq. (29) into Eq. (25) yields an eigenvalue problem for λ². In order to use the Sturm-Liouville method [60], we adopt an iteration method and express the backreaction parameter κ as κ_n = nΔκ with n = 0, 1, 2, ···, where Δκ = κ_{n+1} − κ_n is the step size of our iterative procedure.
Setting κ₋₁ = 0 and λ²|_{κ₋₁} = 0, we find that κ²λ² = κ_n²λ² = κ_n²(λ²|_{κ_{n−1}}) + O[(Δκ)⁴], where λ²|_{κ_{n−1}} is the value of λ² at κ_{n−1}. Hence we can express the function ξ(z) accordingly and, defining a new function, we rewrite Eq. (30) in Sturm-Liouville form. According to the Sturm-Liouville eigenvalue problem [66], the eigenvalue λ² is deduced by minimizing the expression (35). Using Eq. (35) to calculate the minimum eigenvalue of λ² for i = + or i = −, we obtain the critical temperature T_c for different coupling parameters η, backreaction strengths κ and scalar masses m from relation (36). For clarity, we focus on the condensate of the operator O₊, as mentioned in the previous section; as a matter of fact, the other choice, the operator O₋, does not qualitatively modify our results.
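As a concrete illustration of this variational step, the short script below minimizes the Sturm-Liouville functional for the simplest benchmark that can be written in closed form: the probe limit (κ = 0, η = 0) in d = 4 with m² = −2 and the one-parameter trial function F(z) = 1 − az², following [60]. The weight functions T, Q, P coded here are those of that probe-limit benchmark, not the κ- and η-dependent ones used in this paper, so this is a sketch of the method rather than a reproduction of Tables I-IV.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Probe limit, d = 4, m^2 = -2, operator O_+ (Delta_+ = 2), units r_+ = L = 1.
# With psi(z) = <O_+> z^2 F(z) and lam = rho / r_+^2, F obeys
# [T F']' - 4 z^3 F + lam^2 P F = 0, whose Rayleigh quotient is minimized below.
T = lambda z: z**2 * (1.0 - z**3)                  # weight of F'^2
Q = lambda z: 4.0 * z**3                           # potential term, weight of F^2
P = lambda z: z**2 * (1.0 - z) / (1.0 + z + z**2)  # weight of the eigenvalue term

def lambda2(a):
    """Rayleigh quotient evaluated on the trial function F(z) = 1 - a z^2."""
    F = lambda z: 1.0 - a * z**2
    dF = lambda z: -2.0 * a * z
    num = quad(lambda z: T(z) * dF(z)**2 + Q(z) * F(z)**2, 0.0, 1.0)[0]
    den = quad(lambda z: P(z) * F(z)**2, 0.0, 1.0)[0]
    return num / den

res = minimize_scalar(lambda2, bounds=(0.0, 1.0), method="bounded")
# T = 3 r_+/(4 pi) and lam = rho/r_+^2 give T_c = (3/4pi) (lam^2)^(-1/4) rho^(1/2).
Tc = 3.0 / (4.0 * np.pi) / res.fun**0.25
print(f"a = {res.x:.4f}, lambda^2 = {res.fun:.4f}, T_c = {Tc:.4f} rho^(1/2)")
# Expected: a ~ 0.60, lambda^2 ~ 17.3, T_c ~ 0.117 rho^(1/2),
# to be compared with the numerical value T_c ~ 0.118 rho^(1/2).

The iterative improvement of the present paper amounts to re-running such a minimization for each κ_n, feeding λ²|_{κ_{n−1}} back into the κ-dependent weight functions.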
Before going further, we would like to make a comment. In order to obtain the expression (35), we used the boundary condition [T(z)F(z)F'(z)]₀¹ = 0. The condition T(1)F(1)F'(1) = 0 is satisfied easily, since T(1) ≡ 0 from Eq. (33).
On the other hand, we observe that the leading power of z in T(z) guarantees that the condition T(0)F(0)F'(0) = 0 is satisfied automatically. Thus we only require F(z) to satisfy the Dirichlet boundary condition F(0) = 1, rather than also imposing the Neumann boundary condition F'(0) = 0, as discussed in [67]. In the following calculation we assume a one-parameter trial function F(z) with a constant a to be fixed by minimization. We find that this gives a better estimate of the minimum of (35), so that the analytical results are much closer to the numerical findings.
As an example, we calculate the case η = 0, d = 5 and m² = −3 with chosen values of the backreaction parameter κ for the operator O₊, i.e., i = +, and compare with the analytical results of Ref. [9]. Setting the step size Δκ = 0.05, for κ₀ = 0 we arrive at an expression for λ² whose minimum is λ²|_{κ₁} = 17.4250 at a = 0.798627, so the critical temperature reads T_c = 0.194805ρ^{1/3}.
Comparing with the analytical result T_c = 0.193442ρ^{1/3} of Ref. [9], we find that this value is much closer to the numerical result T_c = 0.195293ρ^{1/3}. For κ₂ = 0.10, substituting λ²|_{κ₁} into (32) and (33) we get

λ² = 2.28900 × 10⁶ (0.728219 − 1.24146a + 0.601767a²) / (6.63481 × 10⁴ − 8.77537a + 3.11095 × 10⁴ a²),

whose minimum is λ²|_{κ₂} = 17.1312 at a = 0.792406. Therefore the critical temperature is T_c = 0.186737ρ^{1/3}, which is much closer to the numerical finding T_c = 0.187414ρ^{1/3} than the analytical result T_c = 0.185189ρ^{1/3} of [9]. For other values of η, κ, d and m², the same iterative procedure can be applied to obtain the analytical critical temperature; the results are presented in Tables I and II, respectively. In the calculation we fix the step size Δκ = 0.05. Moreover, to exhibit the dependence of the analytical results on the Einstein tensor more directly, in Fig. 2 we also show the critical temperature T_c obtained by the analytical method as a function of the coupling parameter η for fixed backreaction parameters and scalar masses in d = 5 (left) and d = 4 (right) dimensions. For a fixed backreaction parameter, it is clear that the critical temperature T_c decreases as the coupling parameter η increases, which supports the observation of Ref. [59] and indicates that the Einstein tensor hinders the condensation of the scalar field. Obviously, the effect of the Einstein tensor on the condensation is consistent with the behavior of the effective potential shown in Fig. 1. On the other hand, imposing the Dirichlet boundary condition on the trial function F(z) without the Neumann boundary condition, we observe that the improved Sturm-Liouville method indeed gives a better estimate of the critical temperature than the analytical result from the trial function F(z) = 1 − az² of Ref. [9]. A fragment of the corresponding d = 4 table survives; in each pair the left value is the Sturm-Liouville estimate and the right one the numerical result, with the backreaction parameter κ increasing from left to right:

            T_c (SL)          T_c (num.)        T_c (SL)          T_c (num.)        T_c (SL)          T_c (num.)
            0.085581ρ^{1/2}   0.086667ρ^{1/2}   0.078022ρ^{1/2}   0.079368ρ^{1/2}   0.057473ρ^{1/2}   0.060125ρ^{1/2}
η = 0       0.085581ρ^{1/2}   0.086667ρ^{1/2}   0.078015ρ^{1/2}   0.079360ρ^{1/2}   0.057433ρ^{1/2}   0.060080ρ^{1/2}
η = 0.01    0.085581ρ^{1/2}   0.086667ρ^{1/2}   0.078009ρ^{1/2}   0.079353ρ^{1/2}   0.057394ρ^{1/2}   0.060037ρ^{1/2}
η = 0.10    0.085581ρ^{1/2}   0.086667ρ^{1/2}   0.077963ρ^{1/2}   0.079303ρ^{1/2}   0.057125ρ^{1/2}   0.059734ρ^{1/2}
η = 0.50    0.085581ρ^{1/2}   0.086667ρ^{1/2}   0.077886ρ^{1/2}   0.079211ρ^{1/2}   0.056619ρ^{1/2}   0.059156ρ^{1/2}
η = 1.00    0.085581ρ^{1/2}   0.086667ρ^{1/2}   0.077839ρ^{1/2}   0.079173ρ^{1/2}   0.056408ρ^{1/2}   0.058911ρ^{1/2}

Considering that the effect of the Einstein tensor is intertwined with that of the scalar mass in expression (11), we set m² = 0 to obtain the pure effect of the Einstein tensor on the critical temperature. In Tables III and IV, and from the red lines in Fig. 3, we find that T_c is independent of the coupling parameter η when the backreaction is turned off, which implies that the Einstein tensor does not affect the condensation of the scalar field in the case m² = 0. Again, in this case the effect of the Einstein tensor on the condensation agrees well with the behavior of the effective potential shown in Fig. 1. Thus, the probe approximation loses some important information, and we have to rely on the backreaction to explore the real impact of the Einstein tensor on the holographic superconductors in this case. Moreover, in the case m² = 0 we observe that the critical temperature T_c increases as the spacetime dimension d increases, for fixed coupling parameter η and backreaction parameter κ, which supports the findings of Ref. [69] and means that increasing the dimensionality of the AdS space makes it easier for the scalar hair to form.
On the other hand, from Tables I-IV and Figs. 2-3 we point out that the critical temperature T_c decreases as the backreaction parameter κ increases, for fixed coupling parameter η, scalar mass m² and spacetime dimension d, which shows that a stronger backreaction makes the scalar hair more difficult to develop. This backs up the findings of Refs. [9][10][11][12][13][46].
It is well known that the marginally stable modes correspond to ω = 0, which signals that the phase transition or the critical phenomena may occur [6]. Thus, we solve the equation of motion (47) numerically, integrating from the horizon out to infinity in the case ω = 0, with boundary conditions for R(z) at the event horizon and at the asymptotic AdS boundary. Since we concentrate on the condensate of the operator O₊ in this work, we impose the boundary condition R₋ = 0. In the following calculations we scan the parameter space of the holographic superconductors and find the values of λ = ρ/r₊c^{d−2} which satisfy the boundary condition for given η, κ, m² and d. Since the quantity R(1) is very close to zero near the critical point of the phase transition, we set the initial condition R(1) = 0.001 without loss of generality. In Fig. 4 we plot the marginally stable curves of the scalar field R(z) corresponding to the critical values λ_n, with m² = −3, for different coupling parameters η and backreaction parameters κ in the 5-dimensional AdS black hole background, obtained by solving Eq. (47) numerically. In each panel, the three curves correspond to the first three lowest-lying critical values λ_n, ordered as λ₀ < λ₁ < λ₂, where the index n denotes the "overtone number" [62]. The red line has no intersection with the R(z) = 0 axis at nonvanishing z and corresponds to the minimal value λ₀ (a mode with node number n = 0), which will be the first to condense. The blue line (a mode with n = 1) has one intersection with the R(z) = 0 axis, while the green line (n = 2) has two; these do not matter for the phase transition, because the blue and green modes are expected to be unstable, radial oscillations of R(z) in the z-direction costing energy [70]. At the critical point, we insert λ₀ into the Hawking temperature of the d-dimensional Reissner-Nordström black hole (12) to obtain T_c. In Tables I-IV we present the critical temperatures obtained numerically by the shooting method for the 5-dimensional and 4-dimensional black hole backgrounds. The agreement between the numerical calculation (right column) and the analytical result derived from the Sturm-Liouville method (left column) in each table is impressive, which implies that the Sturm-Liouville method remains powerful for studying holographic superconductors with a scalar field coupled to the Einstein tensor even when the backreactions are considered. Clearly, the "marginally stable modes" method is a very effective way to study the critical behavior of the phase transition in holographic superconductor models.
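The shooting step can be illustrated on the same probe-limit benchmark (d = 4, m² = −2, κ = η = 0): one integrates the marginally stable (ω = 0) scalar equation from the horizon, where regularity fixes ψ'(1) = (2/3)ψ(1), down to the boundary, and tunes λ until the source coefficient ψ₋ vanishes. The equation coded below is again the probe-limit one, not the paper's Eq. (47), so the numbers check the method (λ₀ ≈ 4.1, i.e. T_c ≈ 0.118 ρ^{1/2}) rather than the backreacted tables.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Marginally stable mode (omega = 0), probe limit, d = 4, m^2 = -2, r_+ = L = 1:
# psi'' = (2 + z^3)/(z (1 - z^3)) psi'
#         - [2/(z^2 (1 - z^3)) + lam^2 (1 - z)^2/(1 - z^3)^2] psi
def rhs(z, y, lam):
    psi, dpsi = y
    f3 = 1.0 - z**3
    d2 = ((2.0 + z**3) / (z * f3)) * dpsi \
         - (2.0 / (z**2 * f3) + lam**2 * (1.0 - z)**2 / f3**2) * psi
    return [dpsi, d2]

def source_coefficient(lam, eps=1e-4):
    """Return psi_- in psi ~ psi_- z + psi_+ z^2 near z = 0; a zero marks lambda_n."""
    # Start just off the horizon with the regular behaviour psi'(1) = (2/3) psi(1).
    sol = solve_ivp(rhs, (1.0 - eps, eps), [1.0, 2.0 / 3.0], args=(lam,),
                    rtol=1e-10, atol=1e-12)
    psi, dpsi = sol.y[0, -1], sol.y[1, -1]
    return 2.0 * psi / eps - dpsi   # eliminates psi_+ and isolates psi_-

lam0 = brentq(source_coefficient, 2.0, 6.0)   # lowest-lying (node n = 0) mode
Tc = 3.0 / (4.0 * np.pi) / np.sqrt(lam0)      # T = 3 r_+/(4 pi), lam = rho/r_+^2
print(f"lambda_0 = {lam0:.3f}, T_c = {Tc:.4f} rho^(1/2)")

The higher overtones λ₁, λ₂ would be found by bracketing the subsequent sign changes of the source coefficient.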
In addition to providing the numerical values of the critical temperature, the marginally stable modes reveal the instabilities of the background: the AdS black hole becomes unstable toward developing charged scalar hair.
V. CONCLUSIONS
We have investigated the properties of backreacting holographic superconductors arising from the coupling of a scalar field to the Einstein tensor in the background of a d-dimensional AdS black hole, which provides a more explicit and complete understanding of the effect of the Einstein tensor on the holographic superconductors. Imposing the Dirichlet boundary condition on the trial function F(z) without the Neumann boundary condition, we improved the analytical Sturm-Liouville method with an iterative procedure to calculate the critical temperatures for the scalar operator O₊, and found that the analytical findings obtained in this way are in very good agreement with the numerical results from the "marginally stable modes" method; this implies that the Sturm-Liouville method remains powerful for studying holographic superconductors with a scalar field coupled to the Einstein tensor even when the backreactions are considered. We showed that, when the backreaction parameter is nonzero, the critical temperature decreases as the coupling parameter of the Einstein tensor increases, which backs up the observation of Ref. [59] that the Einstein tensor hinders the condensation of the scalar field. However, when the backreaction parameter and the scalar mass are zero, the critical temperature is independent of the Einstein tensor, which implies that the probe approximation loses some important information, and we have to rely on the backreaction to explore the real and pure impact of the Einstein tensor on the holographic superconductors in this case.
In addition, we observed that the critical temperature increases as the spacetime dimension increases, for fixed scalar mass, coupling parameter and backreaction parameter, which means that the scalar hair can form more easily in a higher-dimensional background. Moreover, we noted with interest that the Einstein tensor, the backreaction and the spacetime dimension cannot modify the critical phenomena: this holographic superconductor phase transition is of second order, and the critical exponent of the system always takes the mean-field value.
"Physics"
] |
Equilibrium fluctuations for the disordered harmonic chain perturbed by an energy conserving noise
We investigate the macroscopic behavior of the disordered harmonic chain of oscillators through energy diffusion. The Hamiltonian dynamics of the system is perturbed by a degenerate conservative noise. After rescaling space and time diffusively, we prove that energy fluctuations at equilibrium evolve according to a linear heat equation. The diffusion coefficient is obtained from Varadhan's non-gradient approach, and is equivalently defined through the Green-Kubo formula. Since the perturbation is very degenerate and the symmetric part of the generator does not have a spectral gap, the standard non-gradient method is reviewed under new perspectives.
Introduction
In this paper we investigate diffusion problems in non-homogeneous media for interacting particle systems. More precisely, we address the problem of energy fluctuations for chains of oscillators with random defects. In the last fifty years, it has been recognized that introducing randomness into interacting particle systems has a drastic effect on the conduction properties of the material. Mathematically, the only tractable model is the one-dimensional system with harmonic interactions [1]. The aim of this paper is to study the diffusive behavior of disordered harmonic chains perturbed by an energy conserving noise. In some sense, the noise should simulate the effect of the nonlinearities, and the conductivity of the one-dimensional chain should become finite and positive. We also expect that some homogenization effect occurs and that the conductivity does not depend on the statistics of the disorder in the thermodynamic limit.
The disorder effect has already been investigated for lattice gas dynamics, for example in [7,8,13,15]. These papers share one main feature: the models are non-gradient due to the presence of the environment. Non-gradient systems are usually solved by establishing a microscopic Fourier's law up to a small fluctuating term, following the sophisticated method initially developed by Varadhan in [19] and generalized to non-reversible dynamics in [10]. The previous works mainly deal with symmetric systems of particles that evolve according to an exclusion process in a random environment: the particles attempt jumps to nearest-neighbor sites at rates which depend on both their position and the target site, and the rates themselves come from a quenched random field. Different approaches are adopted to tackle this non-gradient system: whereas the standard Varadhan method is helpful only in dimension d ≥ 3 [7], the "long jump" variation developed by Quastel in [15] is valid in all dimensions.
The study of disordered chains of oscillators perturbed by a conservative noise has appeared more recently; see for instance [2,3,5]. In these papers, only the behavior of the thermal conductivity defined by the Green-Kubo formula is investigated. Here, the diffusion coefficient is defined through hydrodynamics.
In [18], we obtained the diffusive scaling limit for a homogeneous chain of coupled harmonic oscillators perturbed by a noise which randomly flips the sign of the velocities, so that the energy is conserved but not the momentum. We derived a system of nonlinear hydrodynamic equations for the only two conserved quantities, the energy and the total length of the chain, thanks to the relative entropy method. One of the major ingredients is an exact fluctuation-dissipation equation (see for example [12]), which reproduces the Fourier law at the microscopic level up to a small fluctuating term.
Our first motivation was to investigate the same chain of harmonic oscillators, still perturbed by the velocity-flip noise, but now endowed with i.i.d. random masses. This makes all the previous computations pointless: in particular, the fluctuation-dissipation equations are no longer directly computable. As a consequence, the fluctuation-dissipation decomposition can only be approximated by a sequence of local functions, in the sense that the difference has a small space-time variance with respect to the dynamics at equilibrium. The main ingredients of the usual non-gradient method are, first, a spectral gap for the symmetric part of the dynamics and, second, a sector condition for the total generator. The model thus has special features that force Varadhan's method to be considered from new perspectives. In particular, the symmetric part of the generator is poorly ergodic, and does not have a spectral gap when restricted to the microcanonical manifolds. Moreover, due to the degeneracy of the noise, the asymmetric part of the generator is difficult to control by its symmetric part (in technical terms, the sector condition does not hold) with the velocity-flip noise alone. Besides, let us remark that the energy current depends on the disorder, and has to be approximated by a fluctuation-dissipation equation which takes into account the fluctuations of the disorder itself.
Because of the high degeneracy of the velocity-flip noise, we add a second stochastic perturbation, which exchanges velocities (divided by the square root of the mass) and positions at random independent Poissonian times, so that a kind of sector condition can be proved (see Proposition 5.7; we call it the weak sector condition). However, the spectral gap estimate and the usual sector condition still do not hold even with the exchange noise added. The harmonic chain has helpful features; in particular, the generator of the dynamics preserves the degree of polynomials, so that even a degenerate noise is sufficient to apply Varadhan's approach. The sector condition and the non-gradient decomposition are only needed for a specific class of functions. The stochastic noise still does not have a spectral gap, but this does no harm. Contrary to the standard Varadhan approach, we do not need to prove any general result concerning the so-called closed forms (see [17] for instance). As far as we know, this is the first time that the non-gradient method is used successfully without the spectral gap estimate or the usual sector condition.
Here, we study equilibrium macroscopic energy fluctuations. The disorder is an i.i.d. sequence of positive masses, bounded above and below by C and C⁻¹ for some finite constant C > 0. The equations of motion are Hamiltonian, and the dynamics conserves the total energy E. To overcome the lack of ergodicity of deterministic chains, we add a stochastic perturbation to this dynamics, so that the convergence of the distribution of the energy fluctuations holds (Theorem 3.1). The noise can be easily described: at independently distributed random Poissonian times, the quantity p_x/√(M_x) and the interdistance r_x are exchanged, or the momentum p_x is flipped into −p_x. This noise still conserves the total energy E, and is very degenerate.
Even if Theorem 3.1 could be proved for this harmonic chain, for pedagogical reasons we now focus on a simplified model (as in [4]), which has exactly the same features and involves less painful computations. From now on, we study the dynamics on the new configurations {η_x}_{x∈ℤ}, where m := {m_x}_{x∈ℤ} is the new disorder with the same characteristics as before. It is notationally convenient to change the variable η_x into ω_x := √(m_x) η_x, so that the total energy reads E = Σ_x ω_x². Let us now introduce the corresponding stochastic energy-conserving dynamics: the evolution is described by (1) between random exponential times, and at each ring one of the following interactions can happen: (i) exchange noise: two nearest-neighbor variables ω_x and ω_{x+1} are exchanged; (ii) flip noise: ω_x is flipped into −ω_x. With these two perturbations, the dynamics conserves the total energy only: the other important conservation laws of the Hamiltonian part are destroyed by the stochastic noises. As a result, the family {μ_β}_{β>0} of grand-canonical Gibbs measures is invariant for the process. The index β stands for the inverse temperature. Notice that μ_β does not depend on the disorder, and that the dynamics is not reversible with respect to the measure μ_β. We define e_β as the thermodynamical energy associated with β, namely the expectation of ω₀² with respect to μ_β, and χ(β) = 2β⁻² as the variance of ω₀² with respect to μ_β. We consider the system starting from μ_β, and we denote by E_β the expectation for the stochastic dynamics starting from this invariant distribution. We prove a diffusive behavior for the energy: first, define the distribution-valued energy fluctuation field Y^N. It is well known that Y^N converges in distribution as N → ∞ towards a centered Gaussian field Y, whose covariances are determined by χ(β), for good test functions F, G. In this paper we prove that these energy fluctuations evolve diffusively in time (Theorem 3.1). More precisely, the fluctuation field converges in law as N → ∞ to the solution of a linear stochastic partial differential equation (SPDE), where D is the diffusion coefficient, which has explicit expressions, and B is the standard normalized space-time white noise.
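The displayed SPDE did not survive extraction; given the covariance structure described in Section 3, it should take the standard form for a conservative fluctuation field (the noise prefactor √(2χ(β)D) is the usual one and is written here as an assumption):

\[
\partial_{t}\mathcal{Y}=D\,\partial_{u}^{2}\mathcal{Y}+\sqrt{2\chi(\beta)D}\;\partial_{u}\dot{B},
\]

where Ḃ is the standard normalized space-time white noise.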
Let us now give the plan of the paper. We start with Section 2, which is devoted to introducing the model and all the notations and definitions that are needed. The main point is to identify the diffusion coefficient D (Section 5), by adapting the method introduced in [19]. In Section 4, we derive the Boltzmann-Gibbs principle. The convergence of the energy fluctuation field (in the sense of finite-dimensional distributions) is proved in Section 3. Finally, Section 6 gives a precise description of the diffusion coefficient through several variational formulas. In Section 7, we present a second disordered model, where the interaction is described by a potential V; for this anharmonic chain, we need a very strong stochastic perturbation, which has a spectral gap and satisfies the sector condition. Technical points are detailed in the Appendices: in Appendix A, the space of square integrable functions with respect to the standard Gaussian law is studied through its orthonormal basis of Hermite polynomials; in Appendix B, a weak version of the usual closed forms result is investigated; the sector condition is proved for a specific class of functions in Appendix C; and Appendix D is devoted to the convergence of the Green-Kubo formula.
In Appendix E, the tightness for the energy fluctuation field is investigated.
Generator of the Markov process
We first describe the dynamics on the finite torus T_N := {0, ..., N}, meaning that the boundary conditions are periodic. The configuration {ω_x}_{x∈T_N} evolves according to a dynamics which can be divided into two parts, a deterministic one and a stochastic one. The space of configurations of our system is Ω_N = ℝ^{T_N}. We recall that the disorder is an i.i.d. sequence m = {m_x}_{x∈ℤ} whose marginals take values in [C⁻¹, C] for some finite constant C > 0. The corresponding product and translation invariant measure on the space Ω_D = [C⁻¹, C]^ℤ is denoted by P, and its expectation by E. For a fixed disorder field m = {m_x}_{x∈T_N}, we consider the system of ODEs
dω_x = (ω_{x+1}/√(m_x m_{x+1}) − ω_{x−1}/√(m_{x−1} m_x)) dt, t ≥ 0, x ∈ T_N, and we superpose on this deterministic dynamics a stochastic perturbation described as follows: to each atom x ∈ T_N, and to each bond {x, x+1}, x ∈ T_N, is associated an exponential clock of rate one, all clocks being independent of each other. When the clock attached to x rings, ω_x is flipped into −ω_x, and when the clock attached to the bond {x, x+1} rings, the values ω_x and ω_{x+1} are exchanged. This dynamics is entirely defined by the generator L^m_N of the Markov process {ω_x(t) ; x ∈ T_N}_{t≥0}, which is the sum of the Hamiltonian generator A^m_N and of the noise generators. Here, the configuration ω^x is obtained from ω by flipping the momentum of particle x, and the configuration ω^{x,x+1} is obtained from ω by exchanging the momenta of particles x and x+1. We denote the total generator of the noise by S_N := γ S^flip_N + λ S^exch_N, where γ, λ > 0 are two positive parameters which regulate the respective strengths of the noises.
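To make the dynamics concrete, the toy script below simulates the simplified disordered chain on the torus: the drift written above (which conserves Σ_x ω_x² exactly at the level of the ODE) is integrated with a small Euler step, while flip and exchange events are drawn as Poisson clocks of rates γ and λ. This is an illustrative sketch, not the construction used in the proofs, and the discretisation only conserves the energy up to O(dt²) per step.

import numpy as np

rng = np.random.default_rng(0)
N, beta = 256, 1.0            # chain length, inverse temperature
gamma, lam = 1.0, 1.0         # flip and exchange rates
dt, steps = 1e-3, 20_000

m = rng.uniform(0.5, 2.0, size=N)                # i.i.d. masses in [C^-1, C]
omega = rng.normal(0.0, 1.0 / np.sqrt(beta), N)  # equilibrium mu_beta: variance 1/beta
sq = np.sqrt(m)

for _ in range(steps):
    # Hamiltonian drift:
    # d omega_x = (omega_{x+1}/sqrt(m_x m_{x+1}) - omega_{x-1}/sqrt(m_{x-1} m_x)) dt
    drift = (np.roll(omega, -1) / (sq * np.roll(sq, -1))
             - np.roll(omega, 1) / (np.roll(sq, 1) * sq))
    omega = omega + dt * drift
    # Flip noise: each site flips its sign with probability gamma * dt.
    flips = rng.random(N) < gamma * dt
    omega[flips] *= -1.0
    # Exchange noise: each bond (x, x+1) swaps its variables with probability lam * dt.
    for x in np.flatnonzero(rng.random(N) < lam * dt):
        y = (x + 1) % N
        omega[x], omega[y] = omega[y], omega[x]

print("energy per site:", np.mean(omega**2))     # stays close to 1/beta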
One quantity is conserved: the total energy Σ_x ω_x². The translation invariant product Gibbs measures μ^N_β on Ω_N, products of centered Gaussian laws with variance 1/β, are invariant for the process. In the following, the expectation of f with respect to μ^N_β is denoted by ⟨f⟩_β. The index β stands for the inverse temperature, namely ⟨ω₀²⟩_β = 1/β. Let us highlight the fact that the Gibbs measures do not depend on the disorder m; this obvious remark will play a crucial role further on. By construction, our model is not reversible with respect to the measure μ^N_β: precisely, A^m_N is an antisymmetric operator in L²(μ^N_β), whereas S_N is symmetric. We denote by Ω := ℝ^ℤ the space of configurations on the infinite line, and by μ_β the product Gibbs measure on Ω. Hereafter, for every β > 0, we denote by P⋆_β the probability measure on Ω_D × Ω defined by P⋆_β := P ⊗ μ_β. We notice that P⋆_β is translation invariant, and we write E⋆_β for the corresponding expectation.
Energy current
Since the dynamics conserves the total energy, there exist instantaneous currents of energy j_{x,x+1} such that L^m(ω_x²) = j_{x,x+1}(m, ω) − j_{x−1,x}(m, ω). The quantity j_{x,x+1} is the amount of energy flowing between the particles x and x+1. The energy conservation law can be read locally in terms of J_{x,x+1}(t), the total energy current between x and x+1 up to time t, which decomposes as the sum of ∫₀ᵗ j_{x,x+1}(m, ω(s)) ds and a martingale M_{x,x+1}(t) that can be computed explicitly as an Itô stochastic integral involving the independent Poisson processes (N_{x,x+1})_{x∈ℤ} of intensity λ. We also write j_{x,x+1} = j^A_{x,x+1} + j^S_{x,x+1}, where j^A_{x,x+1} (resp. j^S_{x,x+1}) is the current associated with the antisymmetric (resp. symmetric) part of the generator. One can check that the current can be written neither as the gradient of a local function nor through an exact fluctuation-dissipation equation (in other words, the current is not the sum of a gradient and a dissipative term of the form L^m_N(τ_x h), where h is a local function of the system configuration). This means that we are in the non-gradient case. We also define the static compressibility χ(β) := ⟨ω₀⁴⟩_β − ⟨ω₀²⟩_β² = 2/β².
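The explicit expressions of the currents were lost; a reconstruction consistent with energy conservation under the convention L^m(ω_x²) = j_{x,x+1} − j_{x−1,x} (so that the overall signs and prefactors are an assumption) reads

\[
j^{A}_{x,x+1}=\frac{2\,\omega_{x}\,\omega_{x+1}}{\sqrt{m_{x}m_{x+1}}},
\qquad
j^{S}_{x,x+1}=\lambda\left(\omega_{x+1}^{2}-\omega_{x}^{2}\right),
\]

the antisymmetric part coming from the Hamiltonian drift and the symmetric part from the exchange noise.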
Cylinder functions
For every x ∈ ℤ and every measurable function f on Ω_D × Ω, we consider the translated function τ_x f, defined on Ω_D × Ω by τ_x f(m, ω) := f(τ_x m, τ_x ω), where τ_x m and τ_x ω are the disorder and particle configurations translated by x ∈ ℤ, respectively. If f is a measurable function on Ω_D × Ω, the support of f, denoted by Λ_f, is the smallest subset of ℤ such that f(m, ω) depends only on {m_x, ω_x ; x ∈ Λ_f}; f is called a cylinder function if Λ_f is finite.
For every cylinder function f : Ω_D × Ω → ℝ, consider the formal sum Γ_f := Σ_{x∈ℤ} τ_x f, which does not make sense as such, but for which the gradient ∇₀(Γ_f) := Γ_f(m, ω⁰) − Γ_f(m, ω) is well defined. Similarly, we define (∇_x f)(m, ω) := f(m, ω^x) − f(m, ω) and (∇_{x,x+1} f)(m, ω) := f(m, ω^{x,x+1}) − f(m, ω). Let Λ ⊂⊂ ℤ be a finite subset, and denote by F_Λ the σ-algebra generated by {m_x, ω_x ; x ∈ Λ}. For a fixed positive integer ℓ, we define Λ_ℓ := {−ℓ, ..., ℓ}; if the box is centered at site x ∈ ℤ, we denote it by Λ_ℓ(x) := {−ℓ+x, ..., ℓ+x}.
We denote by C the set of cylinder functions on Ω_D × Ω with compact support and null average with respect to μ_β. We also introduce the set of quadratic cylinder functions on Ω_D × Ω, denoted by Q ⊂ C and defined as follows: ϕ ∈ Q if there exists a finite sequence (ψ_{i,j}(m))_{i,j∈ℤ} of real cylinder functions on Ω_D such that ϕ(m, ω) is the corresponding quadratic form in ω with coefficients ψ_{i,j}(m). For ϕ ∈ C, denote by s_ϕ the smallest positive integer s such that Λ_s contains the support of ϕ, and then set Λ_ϕ = Λ_{s_ϕ}. Hereafter, we consider the operators L^m, A^m and S acting on functions f ∈ C. We also denote S_x = γ∇_x + λ∇_{x,x+1} for x ∈ ℤ. For Λ_ℓ ⊂⊂ ℤ defined as above, we denote by L^m_{Λ_ℓ}, resp. S_{Λ_ℓ}, the restriction of the generator L^m, resp. S, to the finite box Λ_ℓ, assuming periodic boundary conditions. DEFINITION 2.1. Let C₀ (respectively Q₀) be the set of cylinder (respectively quadratic cylinder) functions ϕ on Ω_D × Ω such that there exist a finite subset Λ ⊂⊂ ℤ and cylinder functions {F_x, G_x}_{x∈Λ} satisfying ϕ = Σ_{x∈Λ} (∇_x F_x + ∇_{x,x+1} G_x). If ϕ belongs to Q₀, we require the cylinder functions F_x, G_x to be quadratic.
In the following, we will mostly deal with Q₀. To conclude this section, we introduce the quadratic form associated with the generator: for any x ∈ ℤ and cylinder functions f, g ∈ C, we define the corresponding Dirichlet form D_ℓ. The symmetric form D_ℓ is well defined on C, and it is a random variable with respect to the disorder m.
Semi-inner products and diffusion coefficient
For cylinder functions g, h ∈ C, let ≪g, h≫_{β,⋆} := Σ_{x∈ℤ} E⋆_β[g τ_x h], together with the associated quantity ≪g≫_{β,⋆⋆}; these are well defined because g and h belong to C, so that all but a finite number of terms vanish. Notice that ≪·,·≫_{β,⋆} is an inner product up to degeneracy: since ≪f − τ_x f, g≫_{β,⋆} = 0 for all x ∈ ℤ, this scalar product is only semi-definite. In the next proposition we give explicit formulas for elements of C₀.
Proof. The proof is straightforward. □ DEFINITION 2.2. For β > 0, we define the diffusion coefficient D(β) through a variational formula.
The first term in the sum is due only to the exchange noise, whereas the second one comes from the Hamiltonian part of the dynamics. Formally, the formula could be read as involving (−L^m)⁻¹ j^A_{0,1}, but this last term is ill-defined because j^A_{0,1} is not in the range of L^m. More rigorously, we should define the resolvent-regularized version; the regularized expression is well defined, and the problem reduces to proving convergence as z → 0. From the Hille-Yosida theorem (see Proposition 2.1 in [6] for instance), (5) is equal to the infinite volume Green-Kubo formula (6). In Section 6.2, we prove that (6) converges, adapting the argument from [2]. It follows that the diffusion coefficient can be defined equivalently in the two ways. Thanks to the Green-Kubo formula, one can easily see that D(β) does not depend on β. We denote by L(z) the second term on the right-hand side of (6). The function L is smooth on (0, +∞) (see [16]). Let h_z := h_z(m, ω; β) be the solution of the resolvent equation in L²(≪·,·≫_{β,⋆}): (z − L^m) h_z = j^A_{0,1}.
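The displayed definitions were lost; schematically, and with the exchange-noise contribution written simply as λ and the normalization by χ(β) both taken as assumptions, the regularized formula reads

\[
D(\beta)=\lambda+\frac{1}{\chi(\beta)}\lim_{z\to 0}\;
\langle\!\langle\, j^{A}_{0,1}\,,\,(z-\mathcal{L}^{m})^{-1} j^{A}_{0,1}\,\rangle\!\rangle_{\beta,\star},
\]

the limit of L(z) as z → 0 giving the Hamiltonian contribution.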
Then we have an explicit expression of L(z) in terms of h_z. Observe that if ω is distributed according to μ_β, then β^{1/2}ω is distributed according to μ₁. Since the solutions h_z(m, ·; 1) and h_z(m, ·; β) are related by this scaling and j^A_{x,x+1} is a homogeneous function of degree two in ω, it follows that the diffusion coefficient does not depend on β.
Macroscopic fluctuations of energy
In this section we are interested in the fluctuations of the empirical energy. We prove that the limiting fluctuation process is governed by a generalized Ornstein-Uhlenbeck process, whose covariances are given in terms of the diffusion coefficient. We adapt the non-gradient method introduced by Varadhan; in particular, we establish rigorously the variational formula that appears in the definition of the diffusion coefficient (Definition 2.2). Varadhan's approach is developed in Sections 4, 5 and 6.
Energy fluctuation field
Recall that we denote by e_β the thermodynamical energy associated with the inverse temperature β > 0, namely e_β = β⁻¹. We define the energy empirical distribution π^N_{t,m} on the torus T = [0, 1) as π^N_{t,m} := N⁻¹ Σ_{x∈T_N} ω_x²(t) δ_{x/N}, where δ_u stands for the Dirac measure at u. We denote by {ω(t)}_{t≥0} the Markov process generated by N²L^m_N, and by M₁ the set of probability measures on T, endowed with the weak topology. The space of trajectories in M₁ which are right-continuous with left limits (i.e., the Skorokhod space) is denoted by D([0, T], M₁). If the initial state of the dynamics is given by the equilibrium Gibbs measure μ^N_β, then π^N_{t,m} converges weakly to the deterministic measure e_β du on T. Our goal is to investigate the fluctuations of the empirical measure π^N with respect to this limit. Let us fix the disorder m and the inverse temperature β > 0, and consider the system under the equilibrium measure μ^N_β.
DEFINITION 3.1 (Empirical energy fluctuations).
We denote by Y^N_{t,m} the empirical energy fluctuation field, acting on smooth functions H : T → ℝ.
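The displayed definition was lost; the standard form of the field, consistent with the centered Gaussian limit described below (the √N normalization is the usual one), is

\[
\mathcal{Y}^{N}_{t,m}(H)=\frac{1}{\sqrt{N}}\sum_{x\in\mathbb{T}_{N}}
H\!\left(\frac{x}{N}\right)\left(\omega_{x}^{2}(t)-e_{\beta}\right).
\]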
We are going to prove that the distribution of Y^N_{t,m} converges in law towards the solution of the linear SPDE recalled in the Introduction, where the driving noise is a standard normalized space-time white noise and D is the diffusion coefficient defined in Theorem 5.9. Observe that there is no dependence on the disorder m in the limit process. In other words, the latter is the stationary generalized Ornstein-Uhlenbeck process with zero mean and covariances expressed in terms of D and χ(β), for all t ≥ 0 and smooth functions H, G : T → ℝ; here H̄ (resp. Ḡ) is the periodic extension to the real line of H (resp. G). We denote by Y^N_m the probability measure on D([0, T], M₁) induced by the energy fluctuation field Y^N_{t,m} and the Markov process {ω(t)}_{t≥0} generated by N²L^m_N, starting from the equilibrium probability measure μ^N_β. Let Y be the probability measure on the space D([0, T], M₁) corresponding to the generalized Ornstein-Uhlenbeck process Y_t. The main result of this section is the following.
Strategy of the proof
We follow the lines of Section 3 in [14]. The proof of Theorem 3.1 is divided into three steps. First, we need to show that the sequence {Y^N_m}_{N≥1} is tight. This point follows a standard argument, given for instance in Section 11 of [9] and recalled in Appendix E for the sake of completeness. Then, we prove that the one-time marginal of any limit point Y⋆ of a convergent subsequence of {Y^N_m}_{N≥1} is the law of a centered Gaussian field Y, with covariances expressed through χ(β), for smooth functions H, G : T → ℝ; this statement comes from the central limit theorem for independent variables. Finally, we prove the main point in the next subsections: all limit points Y⋆ of convergent subsequences of {Y^N_m}_{N≥1} solve the martingale problems below.
Martingale decompositions
Let us fix a smooth function H : T → ℝ and rewrite Y^N_{t,m}(H) accordingly. Hereafter, ∇_N denotes the discrete gradient, ∇_N H(x/N) := N[H((x+1)/N) − H(x/N)], and the discrete Laplacian Δ_N is defined in a similar way. To close the equation, we are going to replace the term involving the microscopic currents by a term involving Y^N_{t,m}. In other words, the most important part of the fluctuation field is its projection on the conservation field Y^N_{t,m} (recall that the total energy is the unique conserved quantity of the system). The non-gradient approach consists in using the fluctuation-dissipation approximation of the current j_{x,x+1} given by Theorem 5.9, schematically of the form j_{x,x+1} ≃ D(ω_x² − ω_{x+1}²) + L^m(τ_x f). Here, (N_{x,x+1})_{x∈ℤ} and (N_x)_{x∈ℤ} are independent Poisson processes of intensity λ and γ, respectively. The strategy of the proof is based on the two following results.
LEMMA 3.2. For every smooth function H : T → ℝ and every function f ∈ Q, the additional terms generated by the fluctuation-dissipation decomposition vanish in the limit N → ∞.
THEOREM 3.3 (Boltzmann-Gibbs principle).
There exists a sequence of functions {f_k}_{k∈ℕ} ⊂ Q such that (i) for every smooth function H : T → ℝ, the current fluctuation field is well approximated, and (ii) the approximation error vanishes in the limit. As a result, the martingale decomposition closes, and we have proved that the limit solves the martingale problems (7) and (8), which uniquely characterize the generalized Ornstein-Uhlenbeck process Y_t. The proof of Lemma 3.2 is postponed to the end of this section. The proof of Theorem 3.3 is more challenging, and Sections 4, 5 and 6 are devoted to it.
Proof of Lemma 3.2
In this paragraph we give a proof of Lemma 3.2. After introducing the relevant auxiliary field, we rewrite the quantity under study in terms of the discrete gradient of Γ_f. On the one hand, the first term is of order 1/N² in expectation, because f is a local function of zero average and H is smooth. On the other hand, defining the auxiliary functions Y_x, the expectation of the second term of (12) is again of order 1/N²: indeed, in the expression ∇_{x,x+1}(Y_x) there is a sum over z ∈ ℤ, in which only the terms with |z − x| ≤ 2 remain, since f is local and H is smooth. The same argument applies to the third term of (12).
Central limit theorem variances at equilibrium
In this section we identify the diffusion coefficient D that appears in (9). Roughly speaking, D can be viewed as the asymptotic component of the energy current j_{x,x+1} in the direction of the gradient ω_{x+1}² − ω_x², chosen so that the residual variance vanishes; here, the infimum is taken over all smooth local functions f. Let us first give, in the following subsection, an intuition for the origin of Theorem 4.5.
An insight through additive functionals of Markov processes
Consider a continuous-time Markov process {Y_s}_{s≥0} on a complete and separable metric space E, which has an invariant and ergodic measure π. We denote by ⟨·,·⟩_π the inner product in L²(π) and by L the infinitesimal generator of the process; the adjoint of L in L²(π) is denoted by L*. Fix a function V : E → ℝ in L²(π) such that ⟨V⟩_π = 0. Theorem 2.7 in [11] gives conditions on V which guarantee a central limit theorem for t^{−1/2} ∫₀ᵗ V(Y_s) ds, and shows that the limiting variance equals σ²(V, π). Let the generator L be decomposed as L = S + A, where S = (L + L*)/2 and A = (L − L*)/2 denote, respectively, the symmetric and antisymmetric parts of L. Let H₁ be the completion of L²(π) with respect to the semi-norm ‖·‖₁ defined by ‖f‖₁² := ⟨f, (−S)f⟩_π. Let H₋₁ be the dual space of H₁ with respect to L²(π), in other words the Hilbert space generated by the local functions and the norm ‖·‖₋₁ defined by ‖f‖₋₁² := sup_g {2⟨f, g⟩_π − ‖g‖₁²}, where the supremum is carried over all local functions g. Formally, ‖f‖₋₁² can also be thought of as ⟨f, (−S)⁻¹f⟩_π; notice the difference with the variance σ²(V, π), which formally reads 2⟨V, (−L)⁻¹V⟩_π. Hereafter, Bˢ represents the symmetric part of an operator B. We can write, at least formally, [(−L)⁻¹]ˢ in terms of S and the adjoint A* of A, and deduce that [(−L)⁻¹]ˢ ≤ (−S)⁻¹. The following result is a rigorous estimate of the time variance in terms of the H₋₁ norm; it is proved in [11], Lemma 2.4.
LEMMA 4.1. Given T > 0 and a mean-zero function V ∈ L²(π) ∩ H₋₁,
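The display of the lemma was lost; the standard statement of this bound (Lemma 2.4 in [11]) reads, with the usual constant (the exact value 24 is quoted from the literature and taken here as an assumption):

\[
\mathbb{E}_{\pi}\left[\sup_{0\le t\le T}\left(\int_{0}^{t}V(Y_{s})\,ds\right)^{2}\right]
\;\le\;24\,T\,\|V\|_{-1}^{2}.
\]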
If we compare the previous left-hand side to the Boltzmann-Gibbs principle (11), the next step should be to take V of the form Σ_x G(x/N) τ_x ϕ and then take the limit as N goes to ∞. In the right-hand side of (13) we then obtain a variance that depends on N, and the main task will be to show that this variance converges; this is studied in more detail in what follows. Precisely, we prove that the limit of the variance results in a semi-norm, which is denoted by |||·|||_β and defined in (20). We are going to see that (20) involves a variational formula. The final step consists in minimizing this semi-norm over a well-chosen subspace in order to get the Boltzmann-Gibbs principle, through orthogonal projections in Hilbert spaces. The hard point is that |||·|||_β only depends on the symmetric part S of the generator, and the latter is really degenerate, since it does not have a spectral gap.
In Subsection 4.2, we investigate the variance ⟨f, (−S)⁻¹f⟩_β and prove its well-posedness for every function f ∈ C₀. In Subsection 4.3, we relate the limiting variance (as N goes to infinity) to the suitable semi-norm. Subsection 4.4 is devoted to the proof of the Boltzmann-Gibbs principle, inspired by Lemma 4.1. Then, in Section 5 we investigate the Hilbert space generated by the semi-norm and prove some direct-sum decompositions. Finally, Section 6 focuses on the diffusion coefficient and its different expressions.
Microcanonical measures and integration by parts

4.2.1 Decomposition on microcanonical measures
The thermodynamic ensemble which is naturally associated with a Hamiltonian dynamics is the microcanonical ensemble, which describes the system at fixed energy. It is possible to devise a probability measure on the configurations ω ∈ Ω_N with constant energy β⁻¹ > 0 such that the measure is stationary with respect to the Hamiltonian flow. The corresponding probability measure, denoted by μ^mc_{N,β}, is the normalized uniform probability measure on the sphere S_{N,β} of configurations with total energy Nβ⁻¹.
Now, for fixed β⁻¹ > 0, we disintegrate the microcanonical measure μ^mc_{N,β} on S_{N,β}. Let G be the group generated by the following matrices: the permutation matrices P_σ, defined for any permutation σ of {1, ..., N} by (P_σ)_{ij} = 1 if i = σ(j) and 0 otherwise, and the sign matrices S_k, defined for k ∈ {1, ..., N} as the diagonal matrices flipping the sign of the k-th coordinate. The group G acts on S_{N,β}. For x ∈ S_{N,β}, we denote by G^x_{N,β} the orbit of x = (x₁, ..., x_N) under the action of G; each orbit is finite, with cardinality at most 2^N N!. This group action defines a projection π, and the disintegration of μ^mc_{N,β} with respect to π can be written for all test functions. It is not difficult to see that μ_{N,β,x}(·) := μ^mc_{N,β}(· | x) is the uniform measure on the orbit G^x_{N,β}, since its support is invariant under a subgroup of rotations (of the total sphere). Let us denote by ⟨·⟩_{N,β,x} the corresponding expectation; for all test functions f : S_{N,β} → ℝ, the expectation with respect to μ^mc_{N,β} is then the average of the orbit expectations. To conclude, let us fix the energy β⁻¹ > 0, take x in the microcanonical sphere S_{N,β}, and look at the dynamics generated by L_N restricted to the orbit G^x_{N,β}. Then, observe that the kernel of S_N in the Hilbert space L²(μ_{N,β,x}) has dimension 1. As a result, the range of S_N in L²(μ_{N,β,x}) has codimension 1, and coincides with the set of mean-zero functions.
In the sequel, we give some properties of the two spaces C₀ and Q₀: for instance, the energy current is among the elements of Q₀ (Proposition 4.2). We also prove an integration by parts formula for the functions of C₀ (Proposition 4.3). Moreover, several explicit families of functions can be shown to belong to Q₀.
Properties of C₀ and Q₀
Proof. The first statement is a straightforward consequence of the definition. Besides, (a) is directly obtained from elementary identities valid for x ∈ ℤ and k ≥ 1. Then, if f ∈ Q, it is easy to see that (14) and (15) are sufficient to prove (b). □ Conversely, let us now consider a function ϕ ∈ C₀. From the previous subsection together with Proposition 4.2, we can write the cylinder function ϕ as ϕ = (−S_{Λ_ϕ})(−S_{Λ_ϕ})⁻¹ϕ for some mean-zero function (−S_{Λ_ϕ})⁻¹ϕ, measurable with respect to the variables {m_x, ω_x ; x ∈ Λ_ϕ}. The reversibility of the measure μ_{ℓ,β,x} implies that a corresponding decomposition holds in L²(μ_{ℓ,β,x}). The following proposition is a direct consequence of these comments.
for all rectangles Λ_ℓ that contain Λ_ϕ, for all β > 0, and for all functions g ∈ L²(μ_{ℓ,β,x}), with a constant C(ϕ, β, x) that can be computed explicitly. By reintegrating the disintegrated measure μ_{ℓ,β,x}, the same result (16) may be restated with the microcanonical measures μ_{ℓ,β,x} replaced by μ_β and E⋆_β, and (17) then holds for some constants C₁, C₂ which can be written in terms of variances and do not depend on β.
Proof. Inequalities (17) and (18) follow from the Cauchy-Schwarz inequality applied to (16). Let us notice that the last inequality (19) uses the translation invariance of the measure E⋆_β. □
This result can be restated with respect to the measure μ_β. Proof. This follows from the decomposition of every function in L²(μ_β) over the Hermite polynomial basis: see Proposition A.3 in Appendix A. □
Limiting variance and semi-norm
In this subsection, we obtain a variational formula for the variance associated with ϕ ∈ Q₀, where ℓ_ϕ = ℓ − s_ϕ − 1. We first introduce a semi-norm on Q₀: for any cylinder function ϕ ∈ Q₀, we define |||ϕ|||_β² by (20). As we previously noticed, this formula can be restated in an equivalent way. Since ϕ belongs to Q₀, the results of Subsection 4.2 apply to the first term on the right-hand side of (22). Here, ℓ_ϕ stands for ℓ − s_ϕ − 1, so that the support of τ_x ϕ is included in Λ_ℓ for every x ∈ Λ_{ℓ_ϕ}.
The proof is done in two steps, which we separate into two lemmas for the sake of clarity. In the first lemma, we bound the variance of a cylinder function ϕ ∈ Q₀, with respect to the canonical measure μ_β, by the semi-norm |||ϕ|||_β². In the second step, a lower bound for the variance is easily deduced from the variational formula which expresses the variance as a supremum.
Proof. We follow the proof given in [14], Lemma 4.3, and we first assume that ϕ = ∇₀(F) + ∇₀,₁(G) for two quadratic cylinder functions F, G; the general case then follows by linearity. We write the variational formula expressing the variance as a supremum. Since ϕ is quadratic, we can restrict the supremum to the class of quadratic functions h that are localized in Λ_ℓ (the proof of this statement is detailed in Proposition A.3). It turns out that we can also restrict the supremum to functions h such that E[D_ℓ(μ_β; h)] ≤ Cℓ. This follows from the fact that the first term can be bounded thanks to Proposition 4.3 together with the convexity of the Dirichlet form; recall that C_ϕ is a constant that depends on ϕ. Next, we want to replace the sums over Λ_{ℓ_ϕ} by the same sums over Λ_ℓ (recall that ℓ_ϕ = ℓ − s_ϕ − 1 ≤ ℓ). For that purpose, we introduce the functions ζ₀^ℓ(h) and ζ₁^ℓ(h). From the Cauchy-Schwarz inequality, together with a similar estimate, we obtain an upper bound on the variance in terms of these functions. Let us then choose a sequence {h_ℓ} satisfying E[D_ℓ(μ_β; h_ℓ)] ≤ Cℓ. The sequence {ζ₀^ℓ(h_ℓ), ζ₁^ℓ(h_ℓ)} is then uniformly bounded in L²(E⋆_β), and this implies the existence of a weakly convergent subsequence. We denote by (ζ₀, ζ₁) a weak limit and assume that the sequence {ζ₀^ℓ(h_ℓ), ζ₁^ℓ(h_ℓ)} weakly converges to (ζ₀, ζ₁). The conclusion is now based on the weak version of the closed forms result that we prove in Appendix B, Theorem B.1: the pair (ζ₀, ζ₁) can be represented in L²(E⋆_β) in terms of a function g ∈ Q and a constant a ∈ ℝ. The desired inequality is then a consequence of the following fact: the L²-norm may only decrease along weakly convergent subsequences. The result follows, after recalling (21). □
LEMMA 4.7. Under the assumptions of Theorem 4.5,
Proof. We define, for f ∈ Q, the averaged functions involving S(τ_y f), for which the relevant limits hold. We only prove (24); the other relations can be obtained in a similar way. As previously, we assume for the sake of simplicity that ϕ = ∇₀(F) + ∇₀,₁(G), and we recall an elementary identity. The last limit comes from Proposition 2.1 and the fact that ℓ_ϕ = ℓ − s_ϕ − 1. Then, we conclude from the variational formula written with h = (−S_{Λ_ℓ})⁻¹(aJ_ℓ + H^ℓ_f). The result follows after taking the supremum over f ∈ Q and recalling (21). □
Proof of Theorem 3.3
In this paragraph, we prove Theorem 3.3 by using the central limit theorem variances given in Theorem 4.5. First, we show how to relate (10) to such variances.
The previous result is proved for example in [11] (Section 2, Lemma 2.4). We are going to use this bound for functions of the type Σ_x G(x/N) τ_x ϕ, where ϕ belongs to Q₀. The main result of this subsection is the following. THEOREM 4.9. Let ϕ ∈ Q₀ and let G be a smooth function on T. Then the corresponding space-time variance vanishes in the limit, in the sense of (29). Proof. From Proposition 4.8, the left-hand side of (29) is bounded by a quantity that can be expressed through the variational formula.
Since $\varphi \in \mathcal{Q}_0$, from Proposition A.3 we can restrict the supremum to $f \in \mathcal{Q}$. Proposition 4.3, together with the Cauchy-Schwarz inequality, then gives the corresponding bound. The supremum over $f$ can be explicitly computed and gives the final bound. We are now going to show that the constant on the right-hand side is proportional to $|||\varphi|||_\beta^2$. For that purpose, we average over microscopic boxes: for $k \ll N$, we substitute the corresponding block averages. The error term that appears is estimated through (30): the expression above is bounded by $Ck/N$, and then vanishes as $N \to \infty$. We are reduced to estimating the remaining supremum; by the same argument, this is bounded, and the supremum over $f$ can again be explicitly computed, giving the final bound. Taking the limit as $N \to \infty$ and then $k \to \infty$, we obtain (29) from the central limit theorem for variances at equilibrium (Theorem 4.5). □ We apply Theorem 4.9 to $I^{1,N}_{t,m,f}(H)$, and we obtain the corresponding bound on the $\limsup_{N \to \infty}$.
Hilbert space and projections
We now focus on the semi-norm $|||\cdot|||_\beta$ that was introduced in the previous section by (20). From $|||\cdot|||_\beta$ we can easily define a semi-inner product on $\mathcal{C}_0$ through polarization. Denote by $\mathcal{N}$ the kernel of the semi-norm $|||\cdot|||_\beta$ on $\mathcal{C}_0$. Then the completion of $\mathcal{Q}_0|_{\mathcal{N}}$, denoted by $\mathcal{H}_\beta$, is a Hilbert space. Let us explain how Varadhan's well-known approach is modified. Usually, the Hilbert space on which orthogonal projections are performed is the completion of $\mathcal{C}_0|_{\mathcal{N}}$; in other words, it involves all local functions. The standard procedure then aims at proving that each element of that Hilbert space can be approximated by a sequence of functions in the range of the generator, plus an additional term proportional to the current. A crucial step in obtaining this decomposition consists in, first, controlling the antisymmetric part of the generator by the symmetric one for every cylinder function and, second, proving a strong result on germs of closed forms (see Appendix B). These two key points are not satisfied in our model, but they can be proved when restricted to quadratic functions. It turns out that these weak versions are sufficient, since we are looking for a fluctuation-dissipation approximation that involves quadratic functions only.
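Explicitly, the polarization step takes its usual form (a standard identity, stated here for completeness):

$$\langle\!\langle \varphi, \psi \rangle\!\rangle_\beta \;=\; \tfrac{1}{4}\Bigl(\,|||\varphi + \psi|||_\beta^2 \;-\; |||\varphi - \psi|||_\beta^2\,\Bigr), \qquad \varphi, \psi \in \mathcal{C}_0.$$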
In Subsection 5.1, we show that $\mathcal{H}_\beta$ is the completion of $S\mathcal{Q}|_{\mathcal{N}} + \{j^S_{0,1}\}$. In other words, all elements of $\mathcal{H}_\beta$ can be approximated by $a\,j^S_{0,1} + Sg$ for some $a \in \mathbb{R}$ and $g \in \mathcal{Q}$. This is not irrelevant, since the symmetric part of the generator preserves the degree of polynomial functions. Moreover, the sum of the two subspaces $\{j^S_{0,1}\}$ and $S\mathcal{Q}|_{\mathcal{N}}$ is orthogonal. Nevertheless, this decomposition is not satisfactory, because we want the fluctuating term to be of the form $L^m(f_k)$, and not $S(f_k)$. In order to make this replacement, we need to prove the weak sector condition, which gives a control of $|||A^m g|||_\beta$ by $|||Sg|||_\beta$ when $g$ is a quadratic function. The argument is explained in Subsections 5.2 and 5.3, and the weak sector condition is proved in Appendix C. The only trouble is that this new decomposition is no longer orthogonal, so that we cannot express the diffusion coefficient through a variational formula like (36). This problem is solved in Section 6.
Decomposition according to the symmetric part
We begin this subsection with a list of computational identities that will be very useful in the sequel.
Proof. The first two identities are direct consequences of Theorem 4.5 and of Equality (27). The last two follow directly. □ COROLLARY 5.2. For all $a \in \mathbb{R}$ and $g \in \mathcal{Q}$, the stated identity holds. In particular, the variational formula for $|||h|||_\beta$, $h \in \mathcal{Q}_0$, can be written accordingly. Proof. We divide the proof into two steps.
(a) The space is well generated. The inclusion $S\mathcal{Q}|_{\mathcal{N}} + \{j^S_{0,1}\} \subset \mathcal{H}_\beta$ is obvious. Moreover, from the variational formula (31) we know that if $h \in \mathcal{H}_\beta$ satisfies $\langle\!\langle h, j^S_{0,1}\rangle\!\rangle_\beta = 0$ and $\langle\!\langle h, Sg\rangle\!\rangle_\beta = 0$ for all $g \in \mathcal{Q}$, then $|||h|||_\beta = 0$.
(b) The sum is orthogonal. This follows directly from the previous proposition and from the fact that $\langle\!\langle j^S_{0,1}, Sh\rangle\!\rangle_\beta = 0$ for all $h \in \mathcal{Q}$. □
Replacement of S with L
In this subsection, we prove identities which mix the antisymmetric and the symmetric part of the generator, which will be used to get the weak sector condition (Proposition 5.7).
Proof. This easily follows from the first identity of Proposition 5.1 and from the translation invariance of the measure $\mu_\beta^\star$. □ LEMMA 5.5. For all $g \in \mathcal{Q}$, $\langle\!\langle Sg, j^A_{0,1}\rangle\!\rangle_\beta = -\langle\!\langle A^m g, j^S_{0,1}\rangle\!\rangle_\beta$.
Proof. This follows from the first identity of Proposition 5.1. □ These two lemmas, together with the second identity of Proposition 5.1, imply the following corollary. We are now in a position to state the main result of this subsection. Proof. The proof is technical, consisting of explicit computations for quadratic functions. For that reason, we defer it to Appendix C. □
Decomposition of the Hilbert space
We now deduce from the previous two subsections the expected decomposition of $\mathcal{H}_\beta$.
PROPOSITION 5.8. We denote by $L^m\mathcal{Q}$ the space $\{L^m g \; ; \; g \in \mathcal{Q}\}$. Then the decomposition holds. Proof. We first prove that $\mathcal{H}_\beta$ can be written as the sum of the two subspaces. Then we show that the sum is direct.
(a) The space is well generated. The inclusion $L^m\mathcal{Q}|_{\mathcal{N}} + \{j^S_{0,1}\} \subset \mathcal{H}_\beta$ follows from Proposition 4.2. To prove the converse inclusion, let $h \in \mathcal{H}_\beta$ be such that $\langle\!\langle h, j^S_{0,1}\rangle\!\rangle_\beta = 0$ and $\langle\!\langle h, L^m g\rangle\!\rangle_\beta = 0$ for all $g \in \mathcal{Q}$. From Corollary 5.3, $h$ can be written as $h = \lim_{k\to\infty} S g_k$ for some sequence $\{g_k\} \subset \mathcal{Q}$; more precisely, $\langle\!\langle Sg_k, A^m g_k\rangle\!\rangle_\beta = 0$ by Lemma 5.4. Moreover, we also have by assumption that $\langle\!\langle h, Sg_k\rangle\!\rangle_\beta = 0$ for all $k$, and from Proposition 5.7, $\sup_{k\in\mathbb{N}} |||L^m g_k|||_\beta \leq (C(\beta)+1)\sup_{k\in\mathbb{N}} |||Sg_k|||_\beta =: C_h(\beta)$ is finite. Therefore $|||h|||_\beta = 0$. (b) The sum is direct. Let $\{g_k\} \subset \mathcal{Q}$ be a sequence such that, for some $a \in \mathbb{R}$, the limit holds. By a similar argument, the last equality comes from the fact that $\langle\!\langle j^S_{0,1}, Sg_k\rangle\!\rangle_\beta = 0$ for all $k$. On the other hand, by Proposition 5.7, $|||L^m g_k|||_\beta \leq (C(\beta)+1)|||Sg_k|||_\beta$. Then $a = 0$. This concludes the proof. □ Recall that $j^S_{0,1}(m,\omega) = \lambda(\omega_1^2 - \omega_0^2)$. We have obtained the following result.
This concludes the first statement of Theorem 3.3. We prove the second statement (11) in Proposition 6.5 in Section 6.
On the diffusion coefficient
The main goal of this section is to express the diffusion coefficient through several variational formulas. We also prove the second statement of Theorem 3.3. First, recall Definition 2.2. From Theorem 5.9, there exists a unique number $D$ satisfying the corresponding decomposition. We are going to obtain a more explicit formula for that $D$ and relate it to (36), following the argument in [14]. In Subsection 6.1, we first rewrite the decomposition of the Hilbert space given in Proposition 5.8, replacing $j^S_{0,1}$ with $j_{0,1}$. This new statement is based on Corollary 5.6, which gives an orthogonality relation. The second step is to find another orthogonal decomposition (see (37) below), which will enable us to prove the variational formula (36) for $D$. In Subsection 6.2, we study the convergence of the Green-Kubo formula given in (6), and then, in the last subsection, we investigate its behavior when the intensity of the exchange noise vanishes.
LEMMA 6.1. The following decompositions hold
Proof. We only sketch the proof of the first decomposition, since it is done in [14]. Let us recall from Proposition 5.8 that $L^m\mathcal{Q}$ has a complementary subspace in $\mathcal{H}_\beta$ which is one-dimensional. Therefore, it is sufficient to prove that $\mathcal{H}_\beta$ is generated by $L^m\mathcal{Q}$ and the total current. Let $h \in \mathcal{H}_\beta$ be such that $\langle\!\langle h, j_{0,1}\rangle\!\rangle_\beta = 0$ and $\langle\!\langle h, L^m g\rangle\!\rangle_\beta = 0$ for all $g \in \mathcal{Q}$. By Corollary 5.3, $h$ can be written as $h = \lim_{k\to\infty} Sg_k + a\,j^S_{0,1}$ for some sequence $\{g_k\} \subset \mathcal{Q}$ and $a \in \mathbb{R}$, and from Corollary 5.6, $|||h|||_\beta^2 = \lim_{k\to\infty} \langle\!\langle a\,j^S_{0,1} + Sg_k,\; a\,j_{0,1} + L^m g_k\rangle\!\rangle_\beta$.
Moreover, Proposition 5.7 applies, and the same arguments give the second decomposition. □ We define bounded linear operators $T, T^\star : \mathcal{H}_\beta \to \mathcal{H}_\beta$. From the corresponding identity we can easily see that $T^\star$ is the adjoint operator of $T$, and we also have the related relations. In particular, there exists a unique number $Q$ such that the corresponding identity holds. We are going to show that $D = \lambda Q$.
Proof. The first identity follows directly; the second identity is obtained from the statement above. □ After an easy computation, we can also prove that $\langle\!\langle Tg, g\rangle\!\rangle_\beta = \langle\!\langle Tg, Tg\rangle\!\rangle_\beta$ for all $g \in \mathcal{H}_\beta$. Since $j^S_{0,1} - T j^S_{0,1}$ is orthogonal to $T j^S_{0,1}$, we obtain the variational formula for $|||T j^S_{0,1}|||_\beta$. Proof. With an argument similar to the one in the proof of the previous proposition, we conclude. □ THEOREM 6.4.
Proof. By definition, $j_{0,1} - D\,j^S_{0,1}/\lambda \in L^m\mathcal{Q}$, and therefore $D = \lambda Q$; the variational formula for $D$ can be deduced from the one for $Q$. □ REMARK 6.1. We can rewrite the variational formula for $D$: we use the fact that in (43) we can restrict the infimum to functions $f$ satisfying $\langle\!\langle j^A_{0,1} - A^m f,\; j^S_{0,1}\rangle\!\rangle_\beta = 0$. Let us notice that (44) and (45) recover the variational formula (36). Then, the result follows from $D = \lambda Q$ and the relation (46) involving $\chi(\beta)$. □
Convergence of Green-Kubo formula
Linear response theory predicts that the diffusion coefficient is given by the homogenized Green-Kubo formula $\bar\kappa$, where $\langle\!\langle\cdot,\cdot\rangle\!\rangle_{\beta,\star}$ is the inner product defined by (4). Its Laplace transform is well defined and smooth on $(0,+\infty)$, and can be rewritten accordingly. The forthcoming theorem, proved in Appendix D by considering the resolvent equation, states that the limit exists and is finite.
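Numerically, a truncated version of this Laplace transform can be evaluated directly from a sampled correlation function; the following is an illustrative sketch of ours (the function name and the toy correlation are not from the source):

```python
import numpy as np

def laplace_green_kubo(t, corr, z, beta):
    """Evaluate L(z) = (beta^2 / 2) * integral_0^T e^{-z t} C(t) dt by the
    trapezoidal rule, where C(t) is a sampled current-current correlation.
    The finite upper limit T approximates the infinite integral for z > 0."""
    return 0.5 * beta**2 * np.trapz(np.exp(-z * t) * corr, t)

# Toy usage with a fictitious exponentially decaying correlation:
t = np.linspace(0.0, 50.0, 5001)
corr = np.exp(-t)  # stand-in for <<j(t), j(0)>>
print(laplace_green_kubo(t, corr, z=0.1, beta=1.0))  # ~ 0.5 / (1 + 0.1)
```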
Let us recall the link between the variational formula in Definition 2.2 and the Green-Kubo formula (see the end of Subsection 2.4). Since (47) converges as $z$ goes to 0, it follows that $D = \bar{D}$.
Vanishing exchange noise
With the same ideas as in the previous subsection, it can easily be shown that the homogenized Green-Kubo formula also converges as the strength $\lambda$ of the exchange noise vanishes. The aim of this paragraph is to study the limit of (48) as $\lambda$ goes to 0. First, we turn (47) into a new definition that highlights the dependence on $\lambda > 0$. For that purpose we introduce new notation: we denote $S_0 = \gamma S_{\mathrm{flip}}$ and $S_\lambda = S_0 + \lambda S_{\mathrm{exch}}$. Proof. The proof is divided into two steps. For the sake of readability, we drop the notation $m$ in $J_0(m)$, keeping in mind its dependence on the disorder. We also write $L^2_\star$ for $L^2(\langle\!\langle\cdot,\cdot\rangle\!\rangle_{\beta,\star})$.
Step 1 (Convergence of the diffusion coefficient). Let us denote by $h_{z,0}$ and $h_{z,\lambda}$ the two solutions of the following resolvent equations in $L^2_\star$: $(z - L^m_0)h_{z,0} = J_0$ and $(z - L^{m,\star}_\lambda)h_{z,\lambda} = J_0$. We look at the corresponding difference, for $\lambda, z > 0$ fixed.
To complete the proof, we are reduced to showing that $\lambda\,|\langle\!\langle h_{z,\lambda}, S_{\mathrm{exch}}(h_{z,0})\rangle\!\rangle_{\beta,\star}|$ vanishes when we first let $z \to 0$ and then $\lambda \to 0$. For that purpose, we need more precise information on the two solutions $h_{z,\lambda}$ and $h_{z,0}$. Since the generator $L^m_\lambda$ (resp. $L^m_0$) conserves the degree of homogeneous polynomial functions, we know that the solution $h_{z,\lambda}$ (resp. $h_{z,0}$) of the resolvent equation has to be a homogeneous polynomial of degree two; precisely, it is given in terms of a square integrable symmetric function $\phi_{z,\lambda}(m,\cdot,\cdot) : \mathbb{Z}^2 \to \mathbb{R}$. Every degree two function $h$ can be written as $h = h^{=} + h^{\neq}$, where $h^{=}$ belongs to the subspace $\mathcal{Q}^{=}$ generated by $\{\omega_x^2,\; x \in \mathbb{Z}\}$ and $h^{\neq}$ belongs to the subspace $\mathcal{Q}^{\neq}$ generated by $\{\omega_x\omega_y,\; x \neq y\}$. These two subspaces of $L^2_\star$ have the following properties: (i) $\mathcal{Q}^{=}$ and $\mathcal{Q}^{\neq}$ are orthogonal in $L^2_\star$; (ii) $\mathcal{Q}^{=}$ and $\mathcal{Q}^{\neq}$ are stable under $S_{\mathrm{exch}}$.
(iii) If $h \in \mathcal{Q}^{=}$, then for all $g \in L^2_\star$, $\langle\!\langle S_{\mathrm{exch}}(h), g\rangle\!\rangle_{\beta,\star} = 0$. As a result, the two solutions $h_{z,\lambda}$ and $h_{z,0}$ decompose accordingly, and from the previous remarks we conclude by the Cauchy-Schwarz inequality for the scalar product $\langle\!\langle\cdot, (-S_{\mathrm{exch}})\cdot\rangle\!\rangle_{\beta,\star}$. We treat the two terms separately in the two lemmas below: we prove that the first term is bounded by $C/\sqrt{\lambda}$, and that the second one is uniformly bounded for $\lambda, z > 0$. LEMMA 6.8. There exists a constant $C > 0$ such that the bound holds for all $z, \lambda > 0$. LEMMA 6.9. There exists a constant $C > 0$ such that the bound holds for all $z > 0$. From these statements we deduce $\lambda\,|\langle\!\langle h_{z,\lambda}, S_{\mathrm{exch}}(h_{z,0})\rangle\!\rangle_{\beta,\star}| \leq C_0\sqrt{\lambda}$, where $C_0$ does not depend on $\lambda, z > 0$, and Theorem 6.7 follows.
Step 2 (Proofs of the two lemmas). We begin with the proof of Lemma 6.8. We recall the resolvent equation (52); we multiply it by $h_{z,\lambda}$ and integrate with respect to $\langle\!\langle\cdot,\cdot\rangle\!\rangle_{\beta,\star}$. The right-hand side rewrites as $(2\gamma)^{-1}\langle\!\langle (-S_0)(J_0), h_{z,0}\rangle\!\rangle_{\beta,\star}$.
We now turn to Lemma 6.9. We prove a general result: there exists a constant $C > 0$ such that, for all $g \in \mathcal{Q}^{\neq}$, $\langle\!\langle g, (-S_{\mathrm{exch}})g\rangle\!\rangle_{\beta,\star} \leq C\,\langle\!\langle g, g\rangle\!\rangle_{\beta,\star}$.
This fact is proved through explicit computations. Let us write $g \in \mathcal{Q}^{\neq}$ in the appropriate form; a straightforward computation then gives the bound. In the last inequality, we use the fact that the measure on the disorder is translation invariant and that $(a-b)^2 \leq 2(a^2 + b^2)$ for all $a, b \in \mathbb{R}$. Besides, one can also check the analogous estimate, again thanks to translation invariance. The bound (53) follows directly, with $C = 4$. To prove Lemma 6.9, it remains to show that $\langle\!\langle h^{\neq}_{z,0}, h^{\neq}_{z,0}\rangle\!\rangle_{\beta,\star}$ is uniformly bounded in $z$. We recall the resolvent equation in $L^2_\star$: $z h_{z,0} - (S_0 + A^m)h_{z,0} = J_0$.
Notice that, with the decomposition (51), we can write $S_0(h_{z,0}) = -2\gamma\,h^{\neq}_{z,0}$. We multiply (54) by $h_{z,0}$ and integrate with respect to $\langle\!\langle\cdot,\cdot\rangle\!\rangle_{\beta,\star}$. As previously, the Cauchy-Schwarz inequality for the scalar product $\langle\!\langle\cdot, (-S_0)\cdot\rangle\!\rangle_{\beta,\star}$ on the right-hand side gives the desired bound.
The anharmonic chain perturbed by a diffusive noise
In this last main section we say a few words about the anharmonic chain, meaning that the interaction between atoms is nonlinear and given by a potential $V$. As in [14], we assume that the function $V : \mathbb{R} \to \mathbb{R}_+$ satisfies the following properties: (i) $V(\cdot)$ is a smooth symmetric function; (ii) there exist $\delta_-$ and $\delta_+$ such that $0 < \delta_- \leq V''(\cdot) \leq \delta_+ < +\infty$. Using the same notation as in the introduction, the configuration $\{p_x, r_x\}$ now evolves according to the Hamiltonian dynamics. We define $\pi_x := p_x/\sqrt{M_x}$, and the dynamics rewrites in the variables $\{\pi_x, r_x\}$. The total energy is conserved. The flip and exchange noises have poor ergodicity properties and can be used for harmonic chains only; for the anharmonic case, we introduce a stronger stochastic perturbation. The total generator of the dynamics now writes $L^m = A^m + \gamma S$, where $Y_{x,y} = \pi_x\,\partial_{r_y} - V'(r_y)\,\partial_{\pi_x}$ and $X_x = Y_{x,x}$. For this anharmonic case, the two needed ingredients can be proved directly from [14]. First, notice that the symmetric part of the generator does not depend on the disorder and is exactly the same as in [14]: the proof of the spectral gap is done in Section 12 of that paper. The sector condition can also be proved by taking inspiration from [14]: after taking into account the random environment and its fluctuations, the same argument as in Lemma 8.2, Section 8, can be applied. Indeed, it is mainly based on the fact that the antisymmetric part of the generator can be written in terms of the symmetric one.
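Under the standard conventions for such disordered chains, the Hamiltonian dynamics can be spelled out as follows (our reconstruction; the sign and indexing choices are assumptions):

$$\dot r_x = \frac{p_{x+1}}{M_{x+1}} - \frac{p_x}{M_x}, \qquad \dot p_x = V'(r_x) - V'(r_{x-1}),$$

and, after the change of variables $\pi_x = p_x/\sqrt{M_x}$,

$$\dot r_x = \frac{\pi_{x+1}}{\sqrt{M_{x+1}}} - \frac{\pi_x}{\sqrt{M_x}}, \qquad \dot \pi_x = \frac{1}{\sqrt{M_x}}\bigl(V'(r_x) - V'(r_{x-1})\bigr).$$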
A.1 Hermite polynomials
Let $\chi$ be the set of nonnegative integer-valued functions $\xi : \mathbb{Z} \to \mathbb{N}$ such that $\xi_x$ vanishes for all but a finite number of $x \in \mathbb{Z}$. The length of $\xi$, denoted by $|\xi|$, is defined as $|\xi| = \sum_x \xi_x$. For $\xi \in \chi$, we define the polynomial function $H_\xi(\omega) = \prod_x h_{\xi_x}(\omega_x)$ on $\Omega$, where $\{h_n\}_{n\in\mathbb{N}}$ are the normalized Hermite polynomials w.r.t. the centered one-dimensional Gaussian law with variance $\beta^{-1}$. The sequence $\{H_\xi\}_{\xi\in\chi}$ forms an orthonormal basis of the Hilbert space $L^2(\mu_\beta)$, where $\mu_\beta$ is the infinite product Gibbs measure defined by (2). As a result, every function $f \in L^2(\mu_\beta)$ can be decomposed in the form $f(\omega) = \sum_{\xi\in\chi} F(\xi)H_\xi(\omega)$.
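For a concrete check of this normalization, one can verify the one-dimensional orthonormality numerically; this is a sketch of ours, assuming the standard relation $h_n(x) = \mathrm{He}_n(x\sqrt{\beta})/\sqrt{n!}$ between the normalized basis and the probabilists' Hermite polynomials:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

beta = 2.0  # any inverse temperature beta > 0 (illustrative value)

def h(n, x):
    """Normalized Hermite polynomial w.r.t. N(0, 1/beta):
    h_n(x) = He_n(x * sqrt(beta)) / sqrt(n!)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return hermeval(x * np.sqrt(beta), coeffs) / np.sqrt(factorial(n))

# Monte Carlo check that E[h_m(w) h_n(w)] = delta_{mn} for w ~ N(0, 1/beta)
rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0 / np.sqrt(beta), size=500_000)
gram = np.array([[np.mean(h(m, w) * h(n, w)) for n in range(4)] for m in range(4)])
print(np.round(gram, 3))  # close to the 4x4 identity matrix
```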
Moreover, we can compute the scalar product $\langle f, g\rangle_\beta$ for $f = \sum_\xi F(\xi)H_\xi$ and $g = \sum_\xi G(\xi)H_\xi$. DEFINITION A.1. We denote by $\chi_n \subset \chi$ the subset of sequences of length $n$, i.e. $\chi_n := \{\xi \in \chi \; ; \; |\xi| = n\}$. A function $f \in L^2(\mu_\beta)$ is of degree $n$ if its decomposition $f = \sum_{\xi\in\chi} F(\xi)H_\xi$ satisfies $F(\xi) = 0$ for all $\xi \notin \chi_n$.
It is not hard to check the following proposition: if a local function $f \in L^2(\mu_\beta)$ is written in the form $f = \sum_{\xi\in\chi} F(\xi)H_\xi$, then the stated identity holds, where $\mathfrak{S}$ is the operator acting on functions $F : \chi \to \mathbb{R}$ as indicated. Here, $\xi^{x,y}$ is obtained from $\xi$ by exchanging $\xi_x$ and $\xi_y$.
From this result we deduce: COROLLARY A.2. For any $f = \sum_{\xi\in\chi} F(\xi)H_\xi \in L^2(\mu_\beta)$, we have
A.2 Dirichlet forms and weakly convergent sequences of quadratic functions
In this subsection we focus on the set of quadratic functions in $L^2(\mu_\beta)$, namely degree two functions, which we denote by $\mathcal{Q}$. We first restrict a variational formula to this class of functions, and then we study sequences of functions that weakly converge in $L^2$. PROPOSITION A.3. If $f \in L^2(\mu_\beta)$ is quadratic in the sense above, then the supremum in the following variational formula can be restricted to quadratic functions $g$.
Proof. This fact follows after decomposing $g$ as $\sum_{\xi\in\chi} G(\xi)H_\xi$; it is then an easy consequence of Corollary A.2 and of the orthogonality of Hermite polynomials. □ PROPOSITION A.4. Let $\{f_n\}_n$ be a sequence of quadratic functions in $L^2(\mu_\beta)$. Suppose that $\{f_n\}$ weakly converges to $f \in L^2(\mu_\beta)$. Then $f$ is quadratic.
Proof. For all $n \in \mathbb{N}$ and $\xi \notin \chi_2$, the scalar product $\langle f_n, H_\xi\rangle_\beta$ vanishes (by definition). From weak convergence, we know that $\langle f_n, H_\xi\rangle_\beta \to \langle f, H_\xi\rangle_\beta$ as $n$ goes to infinity, for all $\xi \in \chi$. This implies that $\langle f, H_\xi\rangle_\beta = 0$ for all $\xi \notin \chi_2$, i.e. $f$ is quadratic. □
B A weak version of closed forms results
In this section we prove a theorem that should be thought of as a kind of closed forms result, as stated in [17] or in [9] (Section A.3.4). We give the link between Theorem B.1 below and closed forms at the end of this section.
B.1 Decomposition of quadratic functions
For the sake of clarity, we drop the dependence on the disorder $m$ and consider functions defined on $\Omega$ and square integrable w.r.t. the Gibbs measure $\mu_\beta$. We explain how to restate the same result for functions defined on $\Omega_D \times \Omega$ in Remark B.1. THEOREM B.1. Let $\{f_n\}_{n\in\mathbb{N}}$ be a sequence of quadratic functions in $L^2(\mu_\beta)$. Let us define $g_n := \nabla_0(\Gamma_{f_n})$ and $h_n := \nabla_{0,1}(\Gamma_{f_n})$.
If $\{g_n\}$, respectively $\{h_n\}$, weakly converges in $L^2(\mu_\beta)$ towards $g$, respectively $h$, then there exist $a \in \mathbb{R}$ and $f \in \mathcal{Q}$ such that $g(\omega) = \nabla_0(\Gamma_f)(\omega)$ and $h(\omega) = a(\omega_0^2 - \omega_1^2) + \nabla_{0,1}(\Gamma_f)(\omega)$.
where $\psi_1, \psi_2 : \mathbb{Z}^2 \to \mathbb{R}$ are square integrable symmetric functions. We are now going to give a list of equalities satisfied by this pair of sequences. Let us be more precise: for a pair $(\mathfrak{f}_1, \mathfrak{f}_2)$ of two $L^2(\mu_\beta)$ functions, we define the identities (R1)-(R3), stated in the $L^2(\mu_\beta)$ sense. It is straightforward to check that, for all $n \in \mathbb{N}$, the pair $(g_n, h_n)$ satisfies identities (R1)-(R3). One can easily show that the latter are preserved after passing to the weak limit in $L^2(\mu_\beta)$; precisely, the weak limit $(g, h)$ of $\{g_n, h_n\}$ also satisfies (R1)-(R3). This follows from the following easy lemma (a consequence of the translation invariance of $\mu_\beta$): LEMMA B.2. If $\{g_n\}_n$ weakly converges in $L^2(\mu_\beta)$ towards $g$, then, for all $x \in \mathbb{Z}$, $\{g_n(\omega^x)\}_n$ weakly converges towards $g(\omega^x)$, and $\{g_n(\omega^{x,x+1})\}_n$ weakly converges towards $g(\omega^{x,x+1})$. Notice that all the equalities (R1)-(R3) turn into identities for $\psi_1$ and $\psi_2$, defined in (60) and (61). Namely, $\psi_1$ and $\psi_2$ have to satisfy the corresponding relations.
(R2)
$\psi_2(x,y) = 0$ if $x \notin \{0,1\}$ and $y \notin \{0,1\}$. The first two identities imply that $g$ can be written in the stated form and that $h$ rewrites accordingly, whereas the final equality makes a connection between $g$ and $h$. In view of (58) and (59), we are going to need the following straightforward lemma: • the price to flip $\omega_x$ when the configuration is $\omega$ should be equal to $-\mathfrak{f}^1_x(\omega^x)$: this is (R1); • the price to exchange $\omega_x$ and $\omega_{x+1}$ when the configuration is $\omega$ should also be equal to $-\mathfrak{f}^2_x(\omega^{x,x+1})$: this is (R2).
In the context of interacting particle systems, closed forms are expected to give the same price for any 2-step path with equal end points. In our setting, the last equality (R3) can be translated into: "The quantity at site $x$ is flipped, and then exchanged with the quantity at site $x+1$. Equally, the quantities at sites $x$ and $x+1$ are exchanged first, and then the quantity at site $x+1$ is flipped." There are three other such paths, which we do not need in our result: • two quantities are exchanged at sites $x, x+1$, and also independently at sites $y, y+1$, with $\{x, x+1\} \cap \{y, y+1\} = \emptyset$; • two quantities are flipped independently at sites $x$ and $y$, with $x \neq y$; • the quantity at site $x$ is flipped, and then the quantities at sites $y$ and $y+1$ are exchanged, for $y \notin \{x, x+1\}$, and the converse is also possible.
Recall that we have defined $\Omega := \mathbb{R}^{\mathbb{Z}}$. We denote by $\mathcal{B}$ the space of real-valued functions $\mathcal{B} := \{f : \Omega \to \mathbb{R}\}$.
We are now interested in the space of forms, which are defined as collections $(\mathfrak{f}^1_x, \mathfrak{f}^2_x)_{x\in\mathbb{Z}}$, where $\mathfrak{f}^1_x \in \mathcal{B}$ and $\mathfrak{f}^2_x \in \mathcal{B}$ for every $x \in \mathbb{Z}$. To each function $F : \Omega \to \mathbb{R}$ is associated a form; $(\mathfrak{f}^1_x, \mathfrak{f}^2_x)_{x\in\mathbb{Z}}$ is an exact form if there exists a continuous function $F : \Omega \to \mathbb{R}$ such that, for all $x \in \mathbb{Z}$ and all $\omega \in \Omega$, $\mathfrak{f}^1_x(\omega) = F(\omega^x) - F(\omega)$ and $\mathfrak{f}^2_x(\omega) = F(\omega^{x,x+1}) - F(\omega)$.
One can easily prove that all exact forms are closed forms. We now present two examples of closed forms that play a central role.
EXAMPLE B.1. We denote by $\mathfrak{a} = (\mathfrak{a}^1, \mathfrak{a}^2)$ the closed form defined by $\mathfrak{a}^1_x(\omega) = 0$ and $\mathfrak{a}^2_x(\omega) = \omega_x^2 - \omega_{x+1}^2$, for all $x \in \mathbb{Z}$ and configurations $\omega \in \Omega$. This closed form corresponds to the formal function $F(\omega) = \sum_x x\,\omega_x^2$, but it is not an exact form.
EXAMPLE B.2. Let $h$ be a cylinder function. Recall that we denote by $\Gamma_h$ the formal sum $\sum_x \tau_x h$, and define $u_h = (u^1_h, u^2_h)$ by $(u^1_h)_x(\omega) = \Gamma_h(\omega^x) - \Gamma_h(\omega)$ and $(u^2_h)_x(\omega) = \Gamma_h(\omega^{x,x+1}) - \Gamma_h(\omega)$, for all $x \in \mathbb{Z}$ and configurations $\omega \in \Omega$. Though $\sum_x \tau_x h$ is a formal sum, these two equalities are well defined. Let us notice that $u_h$ is a closed form that is not exact, unless $h$ is constant.
These two examples show that closed forms on $\Omega$ are not always exact forms. Let us introduce the notion of a germ of a closed form; Examples B.1 and B.2 provide two types of germs of closed forms. Consider the cylinder pair $A(\omega) = (0, \omega_0^2 - \omega_1^2)$: the collection $(\tau_x A)_{x\in\mathbb{Z}}$ is the closed form $\mathfrak{a}$ of Example B.1. For a cylinder function $h$, the collection $(\nabla_x\Gamma_h, \nabla_{x,x+1}\Gamma_h)_{x\in\mathbb{Z}}$, obtained through translations of the pair $(\nabla_0\Gamma_h, \nabla_{0,1}\Gamma_h)$, is the closed form of Example B.2. A pair of $L^2(\mu_\beta^\star)$-functions $\mathfrak{f} = (\mathfrak{f}^1, \mathfrak{f}^2)$ is called a germ of a closed form if $(\tau_x\mathfrak{f})_{x\in\mathbb{Z}}$ satisfies all the conditions of a closed form in the $L^2(\mu_\beta^\star)$ sense. Usually, Theorem B.1 is replaced with a similar result that concerns every germ of a closed form in $L^2(\mu_\beta^\star)$: see Theorem 5.1 in [17] or Theorem A.3.4.14 in [9].
C Proof of the weak sector condition
In this section we prove Proposition 5.7 that we recall here for the sake of clarity.
(ii) There exists a positive constant $C(\beta)$ such that, for all $g \in \mathcal{Q}$, $|||A^m g|||_\beta \leq C(\beta)\,|||Sg|||_\beta$.
As a result, the variational formula (31) for $|||A^m g|||_\beta^2$ gives the desired bound, and the result is proved. □
D Convergence of Green-Kubo formulas
In this section we prove Theorem 6.6, which we recall here for the sake of clarity: THEOREM D.1. The limit $\lim_{z\to 0,\, z>0} \langle\!\langle j^A_{0,1}, (z - L^m)^{-1} j^A_{0,1}\rangle\!\rangle_{\beta,\star}$ exists, and is positive and finite.
Proof. We recall that the completion of the space of square integrable local functions w.r.t. $\langle\!\langle\cdot,\cdot\rangle\!\rangle_{\beta,\star}$ is denoted by $L^2_\star$, and that we have defined the quantity $L(z) := \frac{\beta^2}{2}\int_0^{+\infty} e^{-zt}\,\langle\!\langle j^A_{0,1}(t),\, j^A_{0,1}(0)\rangle\!\rangle_{\beta,\star}\,dt$, which is well defined on $(0,+\infty)$. We recall that $h_z := h_z(m,\omega;\beta)$ is the solution of the resolvent equation. THEOREM E.1. For almost every realization of the disorder $m \in \Omega_D$, the sequence $\{Y^N_m\}_{N\geq 1}$ is tight in $D([0,T], \mathcal{M}_1)$.
Let us recall the decomposition of $Y^N_{t,m}$ given in (9):
"Physics"
] |
Pressure-tuning the quantum spin Hamiltonian of the triangular lattice antiferromagnet Cs2CuCl4
Quantum triangular-lattice antiferromagnets are important prototype systems to investigate numerous phenomena of the geometrical frustration in condensed matter. Apart from highly unusual magnetic properties, they possess a rich phase diagram (ranging from an unfrustrated square lattice to a quantum spin liquid), yet to be confirmed experimentally. One major obstacle in this area of research is the lack of materials with appropriate (ideally tuned) magnetic parameters. Using Cs2CuCl4 as a model system, we demonstrate an alternative approach, where, instead of the chemical composition, the spin Hamiltonian is altered by hydrostatic pressure. The approach combines high-pressure electron spin resonance and r.f. susceptibility measurements, allowing us not only to quasi-continuously tune the exchange parameters, but also to accurately monitor them. Our experiments indicate a substantial increase of the exchange coupling ratio from 0.3 to 0.42 at a pressure of 1.8 GPa, revealing a number of emergent field-induced phases.
The interplay between geometrical frustration, quantum fluctuations, and magnetic order is one of the central issues in condensed matter physics. In 1973, developing the resonating valence bond (RVB) theory, Anderson proposed that quantum fluctuations in magnetic structures on an isotropic triangular lattice can be sufficiently strong to destroy the magnetic order, resulting in a two-dimensional (2D) fluid of mobile spin pairs correlated together into singlets 1. This state was introduced as a RVB quantum spin liquid, contrary to the valence-bond solid (VBS), with the ground state condensed into a spin lattice. Anderson's hypothesis has triggered a cascade of extensive theoretical and experimental studies, resulting in the discovery of numerous exotic quantum states and highly unusual field-induced phenomena 2.
The spin-1/2 triangular-lattice Heisenberg antiferromagnet (AF) represents one of the most important groups of the family of low-D quantum frustrated magnets. For the general case of a spatially anisotropic triangular AF, the spin Hamiltonian is given as $\mathcal{H} = J\sum_{\langle i,j\rangle} \mathbf{S}_i\cdot\mathbf{S}_j + J'\sum_{\langle i,j'\rangle} \mathbf{S}_i\cdot\mathbf{S}_{j'}$, where S_i, S_j, and S_j′ are spin-1/2 operators at sites i, j, and j′, and J and J′ are the exchange interactions on the horizontal and diagonal bonds, respectively (Fig. 1a, inset). In spite of the simplicity of this model, such systems are shown to possess a very rich and not fully understood phase diagram, which can be interpolated between decoupled spin-chain (J′ = 0), isotropic triangular (J′/J = 1), and unfrustrated square (J = 0) lattices. It is expected that transitions from one state to another occur in between these well-defined cases, but many details of this evolution (e.g., critical coupling ratios) still remain a matter of debate 3,4. The magnetic phase diagram predicts a variety of exotic phases, with the 1/3 saturation-magnetization plateau as the most exciting magnetic property 5.
The largest hindrance to experimentally checking theoretical predictions on the unusual magnetic properties of spin-1/2 triangular-lattice Heisenberg AFs is the very limited number of materials with appropriate (ideally tuned) sets of parameters currently available for measurements. In spite of the recent progress in synthesizing spin-1/2 triangular-lattice materials (see e.g., ref. 2 and references therein), the two compounds Cs2CuCl4 and Cs2CuBr4 (with J′/J ≃ 0.30 and 0.41, respectively 6) remain among the most prominent representatives of this family of frustrated materials. One obvious approach to tune the spin Hamiltonian of these systems is to vary their chemical composition 7,8. However, experiments on the solid solution Cs2CuCl4−xBrx (with Br content ranging from 0 to 4) revealed a pronounced difference in the Cu coordination when increasing x, resulting in a discontinuous evolution of its crystal structure 9.
The high-pressure technique is known as a powerful means to modify magnetic properties and parameters of exchange-coupled spin systems (see e.g., refs. 10-19). On the other hand, another important task is to precisely measure these parameters. This becomes particularly challenging for low-D spin systems, whose spin Hamiltonian is strongly affected by quantum fluctuations. One way to solve this problem is to suppress quantum fluctuations with strong-enough magnetic fields, and then to use harmonic spin-wave theory for a description of the excitation spectrum 6,20.
Here, we combine high-pressure high-field electron spin resonance (ESR) and radio frequency (r.f.) susceptibility measurements, allowing us not only to quasi-continuously change the exchange parameters J and J′, but also to accurately monitor them. We use Cs2CuCl4 as a model system. We show that the application of pressure significantly increases the exchange coupling parameters in this compound, triggering, at the same time, the emergence of field-induced low-temperature magnetic phases, absent at zero pressure.
Results
High-pressure ESR measurements. To determine the dependence of the coupling parameters of Cs2CuCl4 on the applied pressure, we used the procedure employed in ref. 6, where the excitation spectrum is measured above the saturation field H_sat. In the case of the staggered Dzyaloshinskii-Moriya (DM) interaction, the ESR spectrum should consist of two modes, which correspond to magnetic excitations at the center and at the boundary of the unfolded Brillouin zone (a.k.a. the relativistic and exchange modes, respectively). Such modes were previously observed in Cs2CuCl4 6 (black symbols in Fig. 1a). The field dependence of the relativistic mode A for H ≳ J/gμ_B can be described using the equation ℏω_A = gμ_B H, where ℏ is the reduced Planck constant, ω is the excitation frequency, μ_B is the Bohr magneton, and g = 2.06 is the g factor (the fit results are shown in Fig. 1a by the black dashed line). On the other hand, the frequency-field diagram of mode B can be described by a similar expression (fit shown in Fig. 1a) with the same g factor as for mode A. Most importantly, the difference between the excitation energies for modes A and B (Δω_AB ≡ Δ_B) is determined by J′: J′ = ℏΔω_AB/4, allowing us to measure J′ directly. The experiment revealed a shift of mode B towards higher field when pressure is applied. The pressure dependence of J′ is shown in Fig. 2a, revealing a significant, almost 70%, increase of J′ at 1.92 GPa.
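As a quick numerical illustration of this relation (a sketch of ours; the 115 GHz splitting used below is an assumed, order-of-magnitude input rather than a value quoted in the text):

```python
H_OVER_KB = 4.79924e-2  # Planck constant over Boltzmann constant, in K/GHz

def j_prime_kelvin(delta_f_ab_ghz):
    """J'/k_B in kelvin from the mode-A/mode-B splitting: J' = h*Delta_f_AB/4."""
    return H_OVER_KB * delta_f_ab_ghz / 4.0

print(j_prime_kelvin(115.0))  # ~1.38 K, consistent with the zero-pressure J'/k_B
```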
High-pressure TDO measurements. Knowing J′ and the saturation field H_sat, we can determine J using the expression gμ_B H_sat = 2J(1 + J′/2J)^2. To measure the saturation field of Cs2CuCl4, we employ a tunnel-diode-oscillator (TDO) technique (see Methods). The variations of the TDO circuit resonant frequency Δf/f (which is proportional to the magnetic susceptibility) as a function of the magnetic field applied along the b axis at different pressures are shown in Fig. 3a. In strong magnetic fields, the TDO frequency is almost constant, indicating the transition of Cs2CuCl4 into the fully spin-polarized phase with saturated magnetization 21. The experiment revealed that with increasing pressure the saturation field moves toward higher magnetic fields. The dependence of H_sat on the applied pressure is shown in Fig. 3b. Based on the combined ESR and TDO data, for zero pressure we obtained J′/k_B = 1.38 K and J/k_B = 4.66 K (J′/J ≃ 0.3), which perfectly agrees with the previous estimates 6. Results of a linear fit to the J′ dependence (dashed line in Fig. 2a) were used to calculate J at different pressures. J′, J, and J′/J as functions of the applied pressure are shown in Fig. 2. The J′/J dependence can be described using an empirical equation (Fig. 2b), where P is the applied pressure in GPa. For 1.8 GPa, we obtained J′/k_B = 2.28 K, J/k_B = 5.47 K, and J′/J ≃ 0.42, indicating a remarkable increase of the J′/J ratio by 40%. Based on this fit, the application of a pressure of 3.6 GPa (where Cs2CuCl4 undergoes a structural phase transition 22) would allow one to reach J′/J ≃ 0.53 (which corresponds to approximately 180% of the zero-pressure value).
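The inversion of the saturation-field expression for J can be spelled out as follows (a sketch of ours; the g factor and unit conversions follow the text, while the illustrative H_sat value is an assumption):

```python
import numpy as np

MU_B_OVER_KB = 0.67171  # Bohr magneton over Boltzmann constant, in K/T
G = 2.06                # g factor for the field along the b axis

def j_kelvin(h_sat_tesla, jp_kelvin):
    """Solve g*mu_B*H_sat = 2J*(1 + J'/(2J))**2 for J (all energies in K).
    Expanding gives 2J**2 + (2J' - E)*J + J'**2/2 = 0; take the physical root."""
    e = G * MU_B_OVER_KB * h_sat_tesla
    return ((e - 2*jp_kelvin) + np.sqrt((e - 2*jp_kelvin)**2 - 4*jp_kelvin**2)) / 4.0

print(j_kelvin(8.9, 1.38))  # ~4.7 K, consistent with the quoted zero-pressure J/k_B
```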
Discussion
Apart from the shift of the saturation field, our experiment revealed a number of magnetic anomalies which are absent in Cs2CuCl4 at zero pressure (Fig. 3a). The observed magnetic anomalies can be caused by changes in the dynamics of critical fluctuations in the vicinity of field-induced phase transitions 23, resulting in changes of the real and imaginary components of the magnetic susceptibility. Although no signature of the 1/3 magnetization plateau was revealed, our observation (Fig. 3b) resembles the cascade of field-induced phase transitions in quasi-2D Cs2CuBr4 24, evidence of a complex picture of magnetic interactions including different perturbation terms (the remarkable sensitivity of the magnetic phase diagrams of Cs2CuCl4 to the direction of the applied magnetic field 21 strongly suggests an important role of not only spatial (J′ ≠ J) but also spin-space (asymmetric DM interaction) components of the magnetic anisotropy; the latter appear to be of the same order of magnitude as the interplane exchange interaction J″ 20, inducing strongly relevant perturbations 25).
For the magnetic field applied along the b axis, the zero-pressure magnetic phase diagram contains four low-temperature phases 21. At small field below T_N = 0.62 K, the system is in the incommensurate phase with a spiral ground state 26 dominantly determined by the DM anisotropy ("DM spiral") 25. In this phase, the spins lie almost in the b-c plane, with the spiral propagating along the b axis 26. Remarkably, at about 2.3 T the effect of the DM interaction becomes irrelevant and the system undergoes a transition into the commensurate coplanar AF phase with spins more correlated in a-b planes (the corresponding correlations are determined by J″ and J) 25. These two magnetic phases are stabilized by quantum fluctuations. The commensurate coplanar AF state is realized in a relatively wide field range, followed by two successive high-field transitions: into the noncoplanar cone phase and then, with further increase of the applied magnetic field, into the fully spin-polarized magnetically saturated phase (both phases are favored classically).
What happens when pressure is applied? Apart from the shift of the saturation field, our experiment revealed a number of magnetic transitions, absent at zero pressure (Fig. 3a). The proposed magnetic phase diagram for 1.8 GPa is shown in Fig. 4. Similar to that at zero pressure, at low field the system is in the DM spiral phase. The DM spiral phase is suppressed by the magnetic field at about 2.2-2.6 T (the anomaly A in Fig. 3a corresponds to this transition), resulting in the commensurate coplanar AF phase with spins predominantly correlated in the a-b plane. Applied pressure makes the J′ term more and more relevant, tending to suppress the coplanar nature of magnetic correlations. As a combined effect of the applied magnetic field (partially suppressing quantum order) and pressure (enhancing the interplane correlations), at about 6.9 T the system undergoes a transition into a noncoplanar (presumably) frustrated phase. The observed anomaly B corresponds to this transition.
For a spatially anisotropic triangular-lattice AF in magnetic fields near saturation, theory 27 predicts a particularly rich phase diagram, with ground states ranging from an incommensurate noncoplanar chiral cone to a commensurate coplanar V state. The transformation between these two states involves two intermediate phases. One of them is a coplanar incommensurate order, while the other is a noncoplanar double-Q spiral order (double-cone state). The latter is characterized by the broken Z2 symmetry between two magnon condensates at ±Q (where Q is the ordering wave vector) and can coexist with the single-cone phase in a relatively narrow range of J′/J, but at smaller fields. In Cs2CuCl4 at zero pressure, the transition into the single-cone phase was revealed between 8 and 9 T below 300 mK 21. Due to the increase of the exchange coupling parameters, the applied pressure shifts the upper boundary of the temperature-field phase diagram to higher temperatures. Because of that, the transition into the single-cone phase can be observed at higher temperatures. Based on this assumption, the anomalies C and D (Fig. 3) can be interpreted as transitions into the double- and single-cone phases, respectively (Fig. 4). A tiny feature immediately before saturation might indicate the involvement of other higher-order perturbation factors (e.g., next-nearest-neighbor interactions 28 or the interplane frustration mentioned above 29). Our observations call for systematic high-pressure magnetostructural (such as nuclear magnetic resonance and neutron diffraction) studies of Cs2CuCl4, which would allow one to verify the proposed phase diagram. Apart from the exact identification of the nature of the observed high-pressure phases, another important task would be the search for the field-induced 1/3 magnetization plateau, which can be expected with a further increase of J′/J moving the system towards the isotropic (J′/J = 1) limit 27. It would also be very interesting to measure the pressure-driven evolution of the spin Hamiltonian in the isostructural compound Cs2CuBr4 and to compare the results with those in Cs2CuCl4.
To conclude, we demonstrated an effective strategy to control the spin Hamiltonian of a spin-1/2 antiferromagnet on a triangular lattice with hydrostatic pressure. With increasing pressure, our experiments on Cs2CuCl4 revealed a substantial increase of the exchange coupling parameters, accompanied by the emergence of (at least) two field-induced phases. These phases can be tentatively interpreted as noncoplanar frustrated and double-cone states, bridging the low-field commensurate coplanar and high-field single-cone phases revealed previously. Our approach provides robust means for investigating the complex interplay between geometrical frustration, quantum fluctuations, and magnetic order (especially close to quantum phase transitions), paving the way towards controlled manipulation of the spin Hamiltonian and magnetic properties of frustrated spin systems.
Methods
Single-crystal growth. Single-crystal samples of Cs2CuCl4 were grown by the slow evaporation of an aqueous solution of CsCl and CuCl2 in the mole ratio 2:1.
High-pressure TDO. High-pressure TDO measurements were conducted at the National High Magnetic Field Laboratory (Florida State University) in magnetic fields up to 18 T using a TDO susceptometer 14,15,30 tuned to operate at a resonant frequency of 51 MHz. The magnetic field was applied along the b axis of the crystal. A sample with a length of ~1.5 mm was placed in a copper-wire coil with a diameter of 0.8 mm and a height of ~1 mm. The coil and sample were surrounded with Daphne 7575 oil (Idemitsu Kosan Co., Ltd.) and encapsulated in a Teflon cup, which was inserted into the bore of a piston-cylinder pressure cell constructed from a chromium alloy (MP35N). The coil acts as an inductor in a diode-biased self-resonant LC tank circuit. During the field sweep, changes in the sample magnetic permeability lead to changes in the inductance of the oscillator tank coil and, hence, to changes in the TDO circuit resonant frequency Δf. The frequency changes were detected as a function of the magnetic field at different pressures. The pressure created in the cell was calibrated at room temperature and again at low temperature using the fluorescence of the R1 peak of a small ruby chip as a pressure marker 31, with an accuracy better than ±0.015 GPa. The pressure cell was immersed directly into 3He, allowing TDO measurements down to 350 mK. Temperature was measured using a calibrated Cernox thermometer. Transition fields were measured with an accuracy better than ±0.5%.
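For orientation, transition fields like those in Fig. 3a might be extracted from the raw sweeps along the following lines (a sketch of ours; the derivative-peak criterion and the prominence threshold are assumptions, since the actual analysis pipeline is not described):

```python
import numpy as np
from scipy.signal import find_peaks

def transition_fields(field_tesla, df_over_f, prominence=1e-7):
    """Locate field-induced anomalies as prominent peaks in the magnitude of
    the derivative of the TDO frequency shift with respect to the field."""
    derivative = np.gradient(df_over_f, field_tesla)
    peaks, _ = find_peaks(np.abs(derivative), prominence=prominence)
    return field_tesla[peaks]
```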
High-pressure ESR. High-pressure ESR measurements of Cs2CuCl4 were performed at the High Field Laboratory for Superconducting Materials, Institute for Materials Research (IMR), Tohoku University, using a transmission-type ESR probe 32,33 with oversized waveguides and a 25 T cryogen-free superconducting magnet 34,35. Gunn oscillators, operated at frequencies of 220, 270, 330, and 405 GHz, were employed as radiation sources. A hot-electron InSb bolometer cooled down to 4.2 K was used as a detector. The magnetic field was applied along the b axis of the crystal. Experiments were performed at a temperature of 1.9(1) K; the temperature was measured using a calibrated Cernox thermometer. A cylinder-shaped crystal with approximate dimensions of 9 mm in length by 5 mm in diameter was immersed in a Teflon cup filled with Daphne 7474 oil (Idemitsu Kosan Co., Ltd.) as the pressure medium. A two-section piston-cylinder pressure cell made from NiCrAl (inner cylinder) and CuBe (outer sleeve) was used. The key feature of the pressure cell is the inner pistons, made of ZrO2 ceramics; this material has low loss for electromagnetic radiation with frequencies up to 800 GHz. The change of the superconducting transition temperature of tin was used to calibrate the applied pressure 36; the transition temperature was detected by AC magnetic susceptibility measurements. The applied pressure was calculated using the relation between the load at room temperature and the pressure obtained at around 3 K 32; the pressure calibration accuracy is better than ±0.05 GPa. The ESR line position (mode B) was measured with an accuracy better than ±0.2%. In our experiments, we estimate that the accuracies of J′, J, and J′/J, including all possible error sources, are better than ±1%, ±4%, and ±5%, respectively.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request. The source data underlying Figs. 1a, 2a
"Physics"
] |
The Difference between PC-Based and Immersive Virtual Reality Food Purchase Environments on Useability, Presence, and Physiological Responses
Computer simulations used to study food purchasing behavior can be separated into low immersion virtual environments (LIVE), which use personal computers and standard monitors to display a scene, and high immersion virtual environments (HIVE) which use virtual reality technology such as head-mounted displays to display a scene. These methods may differ in their ability to create feelings of presence or cybersickness that would influence the usefulness of these approaches. In this present study, thirty-one adults experienced a virtual supermarket or fast-food restaurant using a LIVE system or a HIVE system. Feelings of presence and cybersickness were measured using questionnaires or physiological responses (heart rate and electrodermal activity). The participants were also asked to rate their ability to complete the set task. The results of this study indicate that participants reported a higher sense of presence in the HIVE scenes as compared to the LIVE scenes (p < 0.05). The participant’s heart rate and electrodermal activity were significantly higher in the HIVE scene treatment when compared to the LIVE scene (p < 0.05). There was no difference in the participant’s ability to complete tasks in the different scenes. In addition, feelings of cybersickness were not different between the HIVE and LIVE scenes.
Introduction
A poor diet is linked to several chronic diseases, including cardiovascular disease [1], some types of cancer [2], type 2 diabetes [3], and obesity [4]. It has been estimated that 11 million premature deaths and 255 million disability-adjusted life years are attributable to dietary risk factors [5]. In addition, our food choices have a significant impact on the environment [6]. Studies indicate that changing our dietary choices would improve health and reduce environmental degradation [7]. Although dietary habits are often thought to be difficult to change, diets are in a state of constant flux and can change markedly within a generation [8]. In a typical week, 87% of American households purchase food from a grocery store or supermarket, and 85% acquire food from restaurants [9]. Consequently, changing purchasing habits at these locations may help improve health and reduce the impact of diet on the environment. However, identifying strategies that align dietary choices with societal goals while identifying unintended consequences, such as exacerbating existing nutritional inequalities, would be facilitated by the development of new methods to understand consumer purchasing behavior.
Several experimental approaches can be used to understand the effect of the environment on food choices. These include focus groups [10,11], laboratory studies [12], studies that observe consumers in real-life food outlets [13], and studies that use test food outlets or physical simulations of food outlets [14,15]. Each of these methods has strengths and weaknesses. Focus groups provide insight into what consumers are thinking, but it is not clear that their stated food choices actually reflect their food choices in real-life settings [16]. Laboratory studies offer strong experimental control but do not reflect the environment in which food choices are made, and data from these studies may not accurately predict behavior in real-life settings, as several studies report that the environment can influence food purchase decisions [17][18][19]. Field studies that observe consumers in real-life food outlets are the gold standard for studying food choices, as they observe shoppers who are exposed to the full range of environmental factors that may influence their behavior. Crucially, the shoppers' actions are not "zero-stakes" and have real consequences (e.g., they must spend their own money and eat the food they purchase). These studies can also highlight important antecedents to a food purchase (e.g., route taken through the store, time viewing objects, number of foods lifted), which may provide insights into how food purchase decisions are made. However, there are substantial logistical barriers to conducting field studies, as many retailers may be reluctant to allow researchers into stores or restaurants to conduct research. The ability to change pricing, store layout, or shelf placement may also be limited [20]. Other drawbacks to field studies include the cost, the time required to collect data, limited experimental control, difficulties collecting physiological data that may provide insights into purchasing behavior, and difficulties independently replicating the study [21]. An alternative to field studies is to create physical replications of food outlets to investigate food choices [22]. While this, to some extent, would replicate the context in which food choices are made and allow for changes to the food environment, the creation of physical replicas of food outlets requires substantial resources, including space, and may only be available to a small number of researchers, limiting research in this area. Due to the weaknesses of focus groups or laboratory studies and the logistical, cost, and time issues with conducting field studies or testing food outlets/physical simulations of food outlets, the development of alternative approaches to study food choices is required to facilitate innovative approaches to promote food choices that improve health and reduce the environmental impact of the diet. Virtual simulations of grocery stores or restaurants may provide a useful approach to understanding consumer behavior or to refine interventions before they are implemented in real-life settings.
Computer simulations using 3D computer graphics to replicate food outlets are an emerging approach to studying food purchasing behavior. These simulations can be experienced using video walls [22,23], PC monitors [24,25], immersive Virtual Reality (VR) headsets [26,27], CAVE virtual reality systems [28], or augmented reality [29,30]. These approaches can be broadly split into high-immersion virtual environments (HIVE) and low-immersion virtual environments (LIVE). The use of HIVE is particularly intriguing, as VR head-mounted displays (VR-HMD) have become relatively inexpensive and have the potential to create a sense of presence. The sense of presence causes the user to suspend disbelief and believe they are actually in the virtual environment, physically and emotionally reacting to stimuli created by the computer-generated application as if they were in the real world [31]. This may be an important benefit when studying food purchasing habits, as studies show that the environment [32][33][34][35] and emotions [36,37] can influence behavior. Consequently, food purchasing behavior in HIVE may more accurately reflect real-life behavior than simulations experienced using LIVE.
The sense of presence can be measured using questionnaires [38] or physiological markers such as heart rate [39], electrodermal activity (EDA), or electroencephalograms [40]. A previous study that used a questionnaire found that participants who experienced a HIVE supermarket experienced a greater sense of presence than a LIVE version [26]. While questionnaires are a common method to measure presence, they may be subject to response biases and yield inaccurate information [41]. Consequently, physiological markers of presence may yield additional evidence regarding feelings of presence in a virtual environment. To date, studies have investigated the effects of food cues experienced using VR on physiological measures such as heart rate, skin conductance [42], or salivation [43,44], but further research is required to determine how individuals respond in virtual food environments.
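As one concrete illustration (a sketch of ours; a low-pass Butterworth split is only one of several conventions for separating tonic and phasic EDA):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_eda(eda, fs, cutoff_hz=0.05):
    """Split a skin-conductance signal into a tonic (slow) component,
    via a 2nd-order low-pass Butterworth filter, and a phasic residual."""
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
    tonic = filtfilt(b, a, eda)
    return tonic, eda - tonic

# Toy usage on synthetic data sampled at 32 Hz:
fs = 32.0
t = np.arange(0, 60, 1 / fs)
eda = 5 + 0.5 * np.sin(2 * np.pi * 0.01 * t) + 0.05 * np.random.randn(t.size)
tonic, phasic = split_eda(eda, fs)
```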
While HIVE can create a sense of presence, it can also elicit feelings of cybersickness [45]. It is believed that cybersickness is due to a perceptual conflict between the visual system (which reports that the user is moving) and the vestibular system (which reports that the user is stationary) [46]. The symptoms of cybersickness are not trivial and include nausea, pale skin, cold sweats, vomiting, dizziness, headache, dryness of mouth, disorientation, and fatigue [47]. It has been found that up to 80% of immersive VR users experience some cybersickness [48,49], although current-generation virtual reality head-mounted displays may significantly reduce feelings of cybersickness [50]. While most users recover within an hour, some effects can last for several hours [45]. This has implications for HIVE technology. If participants experience cybersickness, it may influence their "food choices" in a virtual environment. Moreover, it may affect their ability to finish a test session, or they may not return for further test sessions [51].
For virtual environments to be useful, they must be usable by the target study population, and users should find the HIVE methods to be equally as usable as the LIVE methods. A potentially key aspect of usability is how users navigate through the virtual store. In LIVE scenes, a keyboard or joystick can be used to navigate through the scene. In HIVE scenes, there are multiple methods to navigate the scene, including the use of the thumbstick on handheld controllers or using the controllers to 'teleport' (i.e., the user points a laser pointer at the spot they want to move to and presses the controller trigger, and they automatically appear in the new position). The differences in how people move through the store may lead to differences in the products that users view (teleporting may mean that users miss products on the shelves that they 'skip' by). In addition, if interaction with the application menus to 'purchase' foods or obtain information about foods is not intuitive or is awkward, the user may choose fewer foods than they would normally select in order to complete this study faster. Again, multiple options are possible, and participants can interact with the menus using a mouse (LIVE scenes) or through a 'laser pointer' (HIVE scenes). In this present study, voice recognition technology was used to interact with menus to investigate another potential option to interact with the application features.
The objective of this study was to determine differences in presence (measured using a questionnaire, heart rate, and EDA), cybersickness (using a questionnaire), and the participant's subjective assessment that they could accomplish a set task (using a questionnaire) when experiencing a LIVE or HIVE supermarket or restaurant. It is hypothesized that there will be increased feelings of subjective presence, heart rate, EDA, and cybersickness when experiencing the HIVE scenes. We also hypothesize that there will be no difference in usability between the LIVE and HIVE scenes.
Participants
Individuals were informed about this study through an email sent to all faculty, students, and staff at Iowa State University or through word of mouth in the local Ames, Iowa, community. Potential participants were informed about this study and, if they remained interested in participating, were asked to sign an informed consent form. After signing the informed consent form, the participant completed a screening questionnaire to confirm their eligibility for this study. If the participant was eligible for this study, they were randomized to a treatment order. Thirty-one participants were recruited subject to the following inclusion criterion: age between 18 and 60 years. Potential participants were excluded if they had a history of motion sickness, experienced seizures of any type, had been diagnosed with a seizure disorder, or had an allergy to adhesives. This study was conducted according to the guidelines laid down in the Declaration of Helsinki, and all procedures involving human subjects/patients were approved by the Institutional Review Board (IRB) at Iowa State University (IRB ID number 19-166, date of approval: 13 May 2019). Written informed consent was obtained from all subjects/patients.
Virtual Worlds
For this study, four computer-generated, three-dimensional (3D) scenes were developed. These were: HIVE supermarket (HIVESM), LIVE supermarket (LIVESM), HIVE fast-food restaurant (HIVEFFR), and LIVE fast-food restaurant (LIVEFFR). The scenes were all identical except for the level of immersion and the method used to navigate around the scene. The virtual scenes were created using the Unity game engine (version 2018.4, Unity Technologies, San Francisco, CA, USA). The 3D models used to create the scenes were purchased from Turbosquid (www.turbosquid.com, accessed on 3 December 2023) or the Unity Asset Store (www.assetstore.com, accessed on 3 December 2023).
The supermarket scene simulated a medium-sized, modern supermarket with models of many foods that are commonly available in a United States supermarket (Figure 1). The participants could obtain nutrition information about food products or purchase foods using voice commands. When participants said the name of a product, a menu appeared in front of that product. When this menu was open, if they said "nutrition," a nutrition information panel would appear. If they said "purchase," the item was purchased. The participant heard background sounds of a busy supermarket, including background conversations, announcements made over the supermarket intercom, and the sound of cash registers being operated. Voice recognition used the inbuilt voice recognition features of the Unity game engine.
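For illustration only, the voice-command logic just described can be summarized as a tiny state machine; all names below are hypothetical and do not come from the study's Unity code:

```python
def handle_utterance(state, utterance, product_names):
    """Saying a product name opens its menu; 'nutrition' and 'purchase'
    act on the currently open menu, mirroring the described interaction."""
    if utterance in product_names:
        return {"open_menu": utterance}
    if "open_menu" in state:
        if utterance == "nutrition":
            return {**state, "showing": "nutrition_panel"}
        if utterance == "purchase":
            return {"purchased": state["open_menu"]}
    return state

state = {}
for said in ["granola", "nutrition", "purchase"]:
    state = handle_utterance(state, said, {"granola", "milk"})
print(state)  # {'purchased': 'granola'}
```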
The restaurant scene simulated a modern fast-food restaurant (Figure 2). The participant interacted with menus on a terminal to select and purchase foods. Similar to the supermarket treatments, the participants used voice commands to select and purchase foods. However, no nutrition information was provided other than calorie information. Background sounds of a busy restaurant were added, which included background conversations, restaurant equipment being operated, and orders being taken.
Figure 1. The virtual supermarket. A menu could be opened for a selected food, allowing the participant to obtain nutrition information about the product.
For the HIVESM and HIVEFFR, the participant could move around the virtual worlds using the hand-held wands that accompany the HTC Vive. The participant moved via 'teleportation': when the participant placed a finger on the trackpad, the graphical representation of the wand in the VR space emitted a laser beam. To move, the participant pointed to the place they wanted to move to and then pressed the trackpad button to move there. For the LIVESM and LIVEFFR, participants navigated throughout the store or restaurant using a first-person avatar that was controlled using a Logitech Extreme 3D Pro joystick (Logitech, CA, USA). Moving the joystick forward/back/left/right moved the avatar in that direction. A 'top-hat' joystick (situated on top of the main joystick) was used to simulate head movement so that different parts of the store or restaurant could be viewed. However, this movement was constrained to 90° to the left and right or up and down, so the participant could not rotate the 'head' through a full 360° range of motion.
A PC (Dell Computers) with the following specifications was used for all aspects of this study: an Intel i7 processor, 16 GB of RAM, and an Nvidia GTX 1070 graphics card. Logitech Z200 stereo speakers (Logitech, CA, USA) were used to produce the restaurant or supermarket sounds. For the PC scenes, participants viewed the scenes on a 21-inch Dell monitor with a resolution of 1024 × 800. For the VR treatments, the same scenes were experienced while wearing an HTC Vive head-mounted display (VR-HMD; HTC, Taoyuan City, Taiwan).
Questionnaires
At the beginning of the first test session, participants completed a questionnaire that collected demographics, educational background, food purchasing habits, attitudes toward food, understanding of computer technology, experience with playing computer games, familiarity with virtual reality, and confidence in navigating computer simulation information. Immediately after leaving the simulation, participants completed the Slater-Usoh-Steed (SUS) presence questionnaire, which captures responses on a 7-point Likert scale [52]. The participant was also asked to rate their feelings of cybersickness and how well they thought they accomplished the task given to them on a 7-point Likert scale. The questionnaires were administered using Qualtrics software (Qualtrics, June 2019 version, Provo, UT, USA), and responses were collected using a personal computer.
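The SUS questionnaire yields one 1-7 rating per item. Purely as an illustration (the exact scoring used in this study is not described here), SUS responses are often summarized either as a mean item score or as the count of high ('6' or '7') responses per participant:

    # Illustrative SUS summary scores for one participant.
    # Six items on a 1-7 Likert scale are assumed; the "count of 6s and 7s"
    # convention follows common SUS practice, not necessarily this study's.
    from statistics import mean

    def sus_summary(responses):
        return {
            "mean_item_score": mean(responses),
            "high_response_count": sum(1 for r in responses if r >= 6),
        }

    print(sus_summary([6, 7, 5, 6, 4, 7]))
    # {'mean_item_score': 5.833..., 'high_response_count': 4}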
Figure 2. The virtual fast-food restaurant. A menu could be opened for a selected food, allowing the participant to obtain nutrition information about the product.
Physiological Measures
The skin area was cleansed with an alcohol swab before surface electrodes were attached to the right forearm, right index finger, right middle finger, and left and right inner ankles to capture heart rate and EDA data. Medical-grade tape was used to ensure the electrodes were secure throughout the testing session. The surface electrodes were connected to a Biopac MP36R (BIOPAC Systems Inc., Goleta, CA, USA). AcqKnowledge (v5.0) software (BIOPAC Systems Inc., Goleta, CA, USA) was used to extract data features.
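Feature extraction here was done in AcqKnowledge; purely to make the analyzed quantities concrete, the sketch below shows one way change-from-baseline heart rate and EDA features could be computed from exported signals. The file names, sampling rate, and naive R-peak detector are assumptions, not the study's pipeline:

    # Illustrative change-from-baseline features for heart rate and EDA.
    import numpy as np
    from scipy.signal import find_peaks

    FS = 1000  # assumed sampling rate (Hz)

    def mean_heart_rate(ecg, fs=FS):
        # Naive R-peak detection by height and refractory distance;
        # production pipelines use dedicated QRS detectors.
        peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 95),
                              distance=int(0.4 * fs))
        rr = np.diff(peaks) / fs        # R-R intervals (s)
        return 60.0 / rr.mean()         # beats per minute

    def delta_from_baseline(baseline, scene, feature):
        return feature(scene) - feature(baseline)

    ecg_base = np.load("ecg_baseline.npy")   # hypothetical exported signals
    ecg_scene = np.load("ecg_scene.npy")
    print(delta_from_baseline(ecg_base, ecg_scene, mean_heart_rate))

    mean_eda = lambda x: float(np.mean(x))   # same pattern for EDA level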
Procedure
Participants reported to the laboratory at a time that was convenient to them between 10 a.m. and 4 p.m. They were required to report to the laboratory at the same time for each of the test sessions and were asked not to eat for at least two hours before each test session. First, the surface electrodes were attached, and the participant was asked to sit quietly for ten minutes so that baseline physiological measurements could be collected. Then, for the HIVE treatments, the VR headset was placed on the participant's head, and the relevant scene was shown. For the LIVE treatments, the relevant scene was shown on the PC monitor. The participant was provided with full instructions about movement through the scene and the voice commands used to interact with menus. While in the restaurant scene, the participant was asked to use the menus to select food items and 'purchase' the chosen items. They were asked to 'purchase' a meal containing a sandwich, a side, and a beverage. Then, they were asked to move around the restaurant for at least five minutes. When exploring the supermarket's aisles, the participant was asked to locate the cereal and bread sections and use the menu selections to read the nutritional information of two specific products (white bread and Cheerios cereal). A researcher was present to confirm that the participant accomplished these tasks by viewing the participant's actions on a PC monitor. The nutrition information presented was based on USDA (United States Department of Agriculture) data, and prices reflected local food outlet values at the time the programs were created. Nutrition facts labels were constructed following the current United States FDA (Food and Drug Administration) labeling guidelines. At the end of viewing the supermarket or restaurant scene, the VR HMD (in the VR scenes) was removed, and the participant completed the SUS, cybersickness, and usability questionnaires.
Statistical Analysis
Means and standard errors of the mean were calculated for all participant responses and study variables. Differences between treatments were determined using a repeated-measures ANOVA, with the condition as a fixed-effect variable and the participant as a random-effect variable. Post hoc analysis was conducted using Tukey's honestly significant difference (HSD) test. Statistical significance was set at p < 0.05 to determine the effect of the condition on response. All statistical analyses were completed using JMP Pro 15.0 software (SAS, Cary, NC, USA).
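The analyses were run in JMP; an equivalent model can be sketched in Python with statsmodels. The long-format column names ("subject", "condition", "response") are assumptions:

    # Repeated-measures ANOVA (condition fixed, participant as subject)
    # with Tukey HSD post hoc, mirroring the JMP analysis.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    df = pd.read_csv("responses.csv")  # one row per subject x condition

    anova = AnovaRM(df, depvar="response", subject="subject",
                    within=["condition"]).fit()
    print(anova)  # F statistic and p-value for the condition effect

    posthoc = pairwise_tukeyhsd(df["response"], df["condition"], alpha=0.05)
    print(posthoc)  # pairwise condition differences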
Demographics
This study group was predominantly female (68% female/32% male) and was in the 18-25-year age group. Participants had a self-reported body mass index of 24.0 (SD = 4.2, range 18.5 to 37.8). Most had some college experience or a 4-year degree and used a computer daily (52%). However, the majority (77%) "never" played computer games, and 58% had not experienced VR before their participation in this study. Most participants visited restaurants (52%) and did their grocery shopping (84%) "once per week." Almost half (48%) of participants reported they were "always" responsible for buying groceries in their household.
Questionnaires
Each of the questions from the presence questionnaire was analyzed individually (Table 1). For the question "rate your sense of being in the SM/FFR scene," there was a statistically significant effect of condition on response (f(3,90) = 37.8, p < 0.0001). Post hoc analysis indicated that the participants had a greater sense of being in the HIVE supermarket/fast-food restaurant (p < 0.05). For the question "to what extent were there times during the experience when the SM/FFR was the reality for you?" there was a statistically significant effect of condition on response (f(3,90) = 44.5, p < 0.0001). Post hoc analysis indicated that the participants had a greater sense that the supermarket/fast-food restaurant was the reality in the HIVE condition (p < 0.05). For the question "Was the SM/FFR more like images that you saw OR more like somewhere that you visited?" there was a statistically significant effect of condition on response (f(3,90) = 18.9, p < 0.0001). Post hoc analysis indicated that the participants had a greater sense of somewhere that they visited in the HIVE conditions (p < 0.05). For the question "Which was strongest, your sense of being in the SM/FFR or of being elsewhere?" there was a statistically significant effect of condition on response (f(3,90) = 23.4, p < 0.0001). Post hoc analysis indicated that the participants had a greater sense of being in the supermarket/fast-food restaurant when in the HIVE condition (p < 0.05). For the question "I think of the SM/FFR as a place in a way similar to other places that I've been today," there was a statistically significant effect of condition on response (f(3,90) = 8.7, p < 0.0001). Post hoc analysis indicated that the participants had a greater sense of having been in a similar place that day when in the HIVE condition (p < 0.05). For the question "Did you often think to yourself that you were actually in the SM/FFR?" there was a statistically significant effect of condition on response (f(3,90) = 17.6, p < 0.0001). Post hoc analysis indicated that the participants had a greater sense of thinking they were in a supermarket/fast-food restaurant when in the HIVE condition (p < 0.05). For the question "Rate the extent to which you were aware of background sounds in the laboratory where this was actually taking place," there was no statistically significant effect of condition on response (f(3,90) = 2.6, p = 0.058). For the question "How dizzy, sick, or nauseous did you feel during or as a result of the experience?" there was no statistical difference between conditions, and only minimal cybersickness was reported after participating in each treatment session (p > 0.05). All participants successfully completed the tasks, and for the question "Overall, how well do you think that you achieved your task?" there was no statistical difference between treatments (p > 0.05).
Physiological Measurements
Table 2 provides data regarding the physiological measurements. There was a statistically significant effect of condition on change in heart rate (f(3,90) = 21.4, p < 0.0001). Post hoc analysis indicated that the participants' heart rate was higher when they were in the HIVE scenes (p < 0.05). There was also a statistically significant effect of condition on change in electrodermal activity (f(3,90) = 5.1076, p = 0.0023). Post hoc analysis indicated that the participants' EDA increased when they were in the HIVE scenes (p < 0.05).
Time in the Scenes
Participants spent 5.8 min (SEM = 0.2) in the LIVEFFR scene, 6.6 min (SEM = 0.3) in the HIVEFFR scene, 6.0 min (SEM = 0.23) in the LIVESM scene, and 7.0 min in the HIVESM scene. There was a statistically significant effect of condition on time spent in the scenes (f(3,90) = 6.8237, p = 0.003). Post hoc analysis found that individuals spent longer in the HIVESM scene than in the LIVESM and LIVEFFR scenes (p < 0.05).
Table 1. Participant responses to questions related to sense of presence following participation in each treatment session. A significantly higher sense of presence is reflected in the VR scenes as compared to the traditional PC monitor. N = 31 for all scenes. Results are mean (SEM). Results with a different superscript are significantly different (p < 0.05).
Discussion
In this present study, we hypothesized that there would be increased feelings of subjective presence, heart rate, EDA, and cybersickness when experiencing the HIVE scenes. We also hypothesized that there would be no difference in usability between the LIVE and HIVE scenes. The HIVE scenes did increase feelings of subjective presence, heart rate, and EDA, so this part of the hypothesis was accepted. However, participants did not report increased cybersickness when in the HIVE scenes, and this part of the hypothesis was rejected. We did not find any differences in participants' ratings of the usability of the scenes. These data add to the growing literature suggesting that virtual environments may be a useful approach to understanding food purchasing behavior.
In this present study, the participants were predominantly young and educated, had little experience with VR, and did not regularly play video games, although most used computers in their daily lives. This study group was relatively homogenous and limited in size, which did not allow the effects of previous experience with VR or computer games, age, or gender on the outcome measures to be investigated. It is possible that age affects a person's experience of using VR. For instance, younger people likely spend larger amounts of time in virtual worlds, which may change their feelings of presence when in a virtual supermarket or restaurant. In addition, their familiarity with virtual worlds may help them navigate through the virtual worlds using the user interface.
The HIVE applications created a greater sense of subjective presence among the users. When designing effective VR scenes, it is essential to create a sense of presence so that users suspend disbelief, believe they are actually present in the VR environment, and respond as they would in equivalent real-life situations [53,54]. While there is no generally accepted measure of 'presence', it has been proposed that questionnaires are the preferred method [39]. However, there are several issues with using questionnaires to determine presence. First, participants may respond to questions in idiosyncratic ways. In one study, participants were asked the question, "Please rate your sense of being in the office space". Participants in a real office space rated their sense of being in the office at only four on a seven-point scale [52]. Presumably, the participants recognized that they were inhabiting reality but were comparing the office to their mental model of what an office should look like, and the low score reflected the discrepancy. Relevant to this present study, it has been suggested that using questionnaires across different types of environments (e.g., immersive VR vs. desktop PC) has limited utility [52], and these data should be interpreted cautiously.
A major limitation when using questionnaires is that participants may guess the purpose of the study, especially as it may be difficult to blind the participants or researchers, and may provide the responses they believe the researchers are looking for (demand bias). Physiological markers of presence, which provide an objective measure, may overcome this limitation. In this present study, heart rate and EDA were measured, with participants exhibiting higher measures in the VR scenes. The most useful physiological markers of presence in studies of restaurants are likely those that mirror the physiological responses observed when an individual is in a real-life restaurant or supermarket [53]. These may include measures of arousal such as heart rate, heart rate variability, or electrodermal activity [53]. In addition, the effect of HIVE on endocrine and metabolic markers may also be useful. When exposed to food cues, there are a number of physiological responses collectively termed the cephalic phase response (CPR). Studies suggest that CPR is related to the metabolic response to foods or meal size [55][56][57]. Little is currently known about how the environment influences CPR, and IVR may provide an approach to studying this phenomenon.
The future development of HIVE food outlets should focus on determining the factors that increase the sense of presence. Primarily, the use of equipment that promotes presence by having high resolution and a good field of view is important [58,59]. Improving the fidelity of food models, menus, interactions, odors, sounds, and haptic feedback would likely increase the realism of the experience and elicit behaviors that better reflect real-life situations. However, it is not clear that increasing realism will increase presence in all situations [59]. In this present study, several participants provided anecdotal reports that in the HIVE supermarket treatment, they felt cold when moving through the freezer section of the store. Moreover, they noticed incongruences between the HIVE supermarket/restaurant and real life that did not meet their expectations (e.g., the lack of soap by the sink in the fast-food restaurant kitchen). They did not mention these after being in the LIVE simulations. Care should be taken when designing HIVE environments so that they match participants' experiences of real-life settings, as incongruences may reduce their presence.
In the present study, participants in the IVR scenes did not report feelings of cybersickness. However, participants spent an average of only 7.0 min in the HIVESM scene and 6.6 min in the HIVEFFR scene. One study found that 61% of users experienced cybersickness during a 20 min exposure to IVR [60]. Most symptoms were reported towards the end of the 20 min period. Consequently, this study may have been of insufficient duration to elicit feelings of cybersickness. In HIVE restaurants, the exposure time may be short, as it is relatively quick to order a meal. However, complex tasks in a HIVE supermarket (e.g., buying a week's worth of groceries) may take 20-30 min. The ability of individuals with different demographic characteristics to remain in HIVE for up to 30 min to complete complex tasks requires investigation. In a previous study of HIVE, between 4 and 16% of people terminated their participation before the allotted time was over [61]. If these results hold in HIVE supermarkets, this would seriously curtail the usefulness of IVR supermarkets and potentially restrict them to more focused questions.
The method of locomotion may reduce feelings of cybersickness. Body trackers can be used so that when a participant walks in real life, they move in the virtual scene. This may reduce the incongruency between the information that the visual system is receiving and the information that the vestibular system is receiving, reducing feelings of cybersickness. However, this approach may require substantial amounts of space, depending on the size of the scene. Participants may also feel uncomfortable walking around a space while wearing a head-mounted display. Alternative methods include using a handheld controller to move through the scene (e.g., the Oculus Rift handheld controller or a gamepad) or teleportation, where the participant points with a controller to a point they want to move to and then presses a button to instantly move there. These methods require less space than using body trackers that measure the movement of someone walking in a room. Teleportation may cause lower sensations of cybersickness than steering methods [62].
Participants were able to complete the tasks asked of them and self-reported that they completed the tasks adequately in each of the treatments. It is crucial that applications are usable by a wide range of individuals, and further research is required to determine whether these applications can be used adequately by a wider cross-section of society, in particular by individuals who do not commonly use computers, play video games, or have prior exposure to virtual reality.
Limitations
It is important to note that this study has several limitations. First, this study is exploratory in nature and uses a relatively small sample size. Consequently, results from this study require confirmation by larger studies. Second, further studies are required to determine the effects of age, experience with computers or IVR, and educational background on presence (measured using questionnaires and physiological markers), cybersickness, and usability in virtual supermarkets/restaurants. Third, the participants spent a relatively short amount of time in IVR. Further research is required to determine whether cybersickness symptoms appear, or tiredness increases, after longer periods of time in IVR. Fourth, the effect of repeated exposure to IVR on presence and cybersickness requires further investigation (e.g., is the sense of presence or cybersickness reduced with repeated exposure?).
This study was approved by the Institutional Review Board (IRB) at Iowa State University (IRB ID number 19-166, date of approval: 13 May 2019). Written informed consent was obtained from all subjects/patients.
Table 2. Participant physiological measurements during treatment. Changes in heart rate and electrodermal activity were significantly higher in the VR treatments. N = 31 for all scenes. Results are mean (SEM). Results with a different superscript are significantly different (p < 0.05).
"Computer Science"
] |
Variants of the FADS1 FADS2 Gene Cluster, Blood Levels of Polyunsaturated Fatty Acids and Eczema in Children within the First 2 Years of Life
Background: Association of genetic variants in the FADS1 FADS2 gene cluster with fatty acid composition in the blood of adult populations is well established. We analyze this genetic association in two children's cohort studies. In addition, the association between variants in the FADS gene cluster and blood fatty acid composition with eczema was studied. Methods and Principal Findings: Data from two population-based birth cohorts in the Netherlands and Germany (KOALA, LISA) were pooled (n = 879) and analyzed by (logistic) regression regarding the mutual influence of single nucleotide polymorphisms (SNPs) in the FADS gene cluster (rs174545, rs174546, rs174556, rs174561, rs3834458) on polyunsaturated fatty acids (PUFA) in blood and parent-reported eczema until the age of 2 years. All SNPs were highly significantly associated with all PUFAs except alpha-linolenic acid and eicosapentaenoic acid, also after correction for multiple testing. All tested SNPs showed associations with eczema in the LISA study, but not in the KOALA study. None of the PUFAs was significantly associated with eczema, either in the pooled analyses or in the analyses stratified by study cohort. Conclusions and Significance: PUFA composition in young children's blood is under strong control of the FADS gene cluster. Inconsistent results were found for a link between these genetic variants and eczema. PUFA in blood was not associated with eczema. Thus, the hypothesis of an inflammatory link between PUFA and eczema via the metabolic pathway of LC-PUFAs as precursors of inflammatory prostaglandins and leukotrienes could not be confirmed by these data.
Introduction
It has been well established that genetic variants in the fatty acid desaturase genes (FADS1 and FADS2) are associated with fatty acid composition in adult populations [1]. Several studies have shown that the Δ-5 and the Δ-6 desaturase enzymes are involved in fatty acid metabolism in adults and that these enzymes are genetically regulated by variants of the FADS1 and FADS2 genes, respectively [2][3][4][5][6][7][8].
Empirical and theoretical evidence exists that fatty acid metabolism may be involved in atopic eczema [9][10][11][12][13][14]. Some studies found that dietary intake of certain fatty acids can contribute to the development of allergic diseases. Kompauer et al showed a positive association between hay fever and arachidonic acid (AA) intake, and between allergic sensitisation and oleic acid intake, in German adults [15].
Long chain polyunsaturated fatty acids (LC-PUFA) can influence inflammatory responses, as they are precursors of eicosanoids and docosanoids [16]. The link between PUFA and inflammatory processes are the eicosanoids, with arachidonic acid (AA) as their main precursor. Eicosanoids derived from AA (e.g. the leukotriene LTB4, the prostaglandins PGE2 and PGI2, or the thromboxane TXA2) have mainly pro-inflammatory effects [16]. According to the findings of several studies, a defect in the enzyme activity of Δ-6 desaturase, encoded by the FADS2 gene, leads to enhanced blood levels of the n-6 and n-3 parent fatty acids linoleic acid (LA) and alpha-linolenic acid (ALA), respectively, and decreased levels of AA and eicosapentaenoic acid (EPA), whereas docosahexaenoic acid (DHA) levels are not influenced [5,6]. Schaeffer et al. showed that the variability of serum fatty acid levels explained by genetic variants in the FADS1 FADS2 gene cluster is highest for AA, at 28% [6]. Thus, variants in the FADS1 FADS2 gene cluster may be indirectly associated with inflammatory processes via their influence on endogenous LC-PUFA production, particularly AA production. Figure 1 shows these metabolic pathways of n-6 and n-3 fatty acids and the pathways of production of pro-inflammatory and less inflammatory eicosanoids and anti-inflammatory docosanoids schematically [12].
No study has empirically confirmed the genetic link between variants in the FADS1 FADS2 gene cluster and polyunsaturated fatty acids (PUFA) in children. Schaeffer et al found an association of rarer FADS haplotypes with a reduced eczema risk in adults [6], but so far, no study has been published relating both genotypes and fatty acids to eczema development in children. Therefore, we analyzed this genetic association in a children's cohort study pooled from two population-based birth cohorts and investigated the association between variants in the FADS gene cluster, blood fatty acid composition, and eczema up to the age of two years.

Figure 1. Metabolic pathways of n-6 and n-3 fatty acids and pathways of production of pro-inflammatory and less inflammatory eicosanoids and anti-inflammatory docosanoids [11,12,40]. LA = linoleic acid, GLA = gamma-linolenic acid, DGLA = dihomo-gamma-linolenic acid, AA = arachidonic acid, A = adrenic acid, ALA = alpha-linolenic acid, EPA = eicosapentaenoic acid, DPA = docosapentaenoic acid, DHA = docosahexaenoic acid; LTs = leukotrienes, PGs = prostaglandins, TXs = thromboxanes, RVs = resolvins. doi:10.1371/journal.pone.0013261.g001
Ethics statement
Approval by the respective local Ethics Committees (Maastricht University/University Hospital of Maastricht, Bavarian General Medical Council) and written informed consent from participants' families (parents) were obtained in both KOALA and LISA studies.
Study design and population
The KOALA Birth Cohort Study ("Kind, Ouders en gezondheid: Aandacht voor Leefstijl en Aanleg") is a prospective cohort study of 2834 mother-infant pairs in the Netherlands. The aim of the KOALA study was to investigate factors that influence the clinical expression of atopic disease, with a main focus on lifestyle factors (e.g. anthroposophy, dietary habits, breastfeeding, intestinal microflora, and gene-environment interactions). Enrollment started in October 2000. Details of the study design have been described elsewhere [17,18].
The LISA study ("Influences of Lifestyle related Factors on the Immune System and the Development of Allergies in Childhood") is an ongoing population-based birth cohort study of 3097 unselected newborns. Between November 1997 and January 1999, these 3097 healthy full-term newborns (gestational age ≥ 37 weeks) were recruited from 14 obstetrical clinics in Munich (n = 1467), Leipzig (n = 976), Wesel (n = 306), and Bad Honnef (n = 348). Details on the study design are published elsewhere [19,20].
The studied analysis population with information on genotypes, fatty acids and eczema at 2 years of age comprised 546 children in the KOALA birth cohort study and 333 in the LISA birth cohort study.
For details on both studies, see Supporting Information Appendix S1.
Fatty acid analysis
In the KOALA study, blood was collected in EDTA tubes during a home visit to the child around age 2 years by trained nurses according to a standardized protocol. The EDTA plasma was used for the analysis of the fatty acid status and was, after centrifugation, stored in cryovials at −80 °C. The EDTA plasma was deproteinated, and the precipitate was removed by centrifugation. Then the sample was applied to an aminopropyl solid-phase column to selectively elute the phospholipid fraction [21,22].
In the LISA study, venous blood samples were collected in serum-separator tubes. Samples were centrifuged, and serum was frozen in plastic vials and stored at −80 °C until analysis. The analysis of the plasma glycerophospholipid composition was performed by a sensitive and precise high-throughput method as described recently [23].
For details, see Supporting Information Appendix S2.
Genotyping
In KOALA, genomic DNA was extracted from buccal swabs using standard methods as described previously [24]. In LISA, genomic DNA was extracted from EDTA blood.
For details, see Supporting Information Appendix S3.
In the LISA study, blood samples were collected during the physical examination of the infant at age 2 years and analysed for total and specific IgE using the RAST-CAP-FEIA system (Pharmacia, Freiburg, Germany) as previously described [30]. For details, see Supporting Information Appendix S4.
Definition of outcome variable parental reported eczema
In both the KOALA and LISA studies, the definition of "parental reported eczema" is based on the questionnaire-reported occurrence of an "itchy rash that was coming and going" at any time within the first two years of life [18,31].
For details, see Supporting Information Appendix S5.
Statistical analysis
Allele frequencies, Fisher's exact test of Hardy-Weinberg equilibrium (HWE), and the linkage disequilibrium (LD) measures Lewontin's D' and the pairwise squared correlation r² were calculated for the pooled study population and for both studies separately.
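As an illustration of these checks, allele frequency, an HWE test, and a genotype-based r² can be computed directly from genotype counts and dosages. The sketch below uses a chi-square HWE test as a simpler stand-in for the Fisher's exact test used in the paper, and the counts are invented:

    # Hardy-Weinberg test (chi-square stand-in for Fisher's exact test)
    # and genotype-dosage r^2; genotype counts and dosages are hypothetical.
    import numpy as np
    from scipy.stats import chisquare

    def hwe_chisq(n_AA, n_Aa, n_aa):
        n = n_AA + n_Aa + n_aa
        p = (2 * n_AA + n_Aa) / (2 * n)                       # allele frequency
        expected = np.array([p**2, 2*p*(1 - p), (1 - p)**2]) * n
        stat, pval = chisquare([n_AA, n_Aa, n_aa], expected, ddof=1)
        return p, pval

    def genotype_r2(g1, g2):
        # Squared correlation of 0/1/2 dosages; approximates haplotype
        # r^2 under HWE (true LD estimation requires phased haplotypes).
        return float(np.corrcoef(g1, g2)[0, 1] ** 2)

    print(hwe_chisq(180, 420, 279))   # hypothetical counts, n = 879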
Single-SNP linear regression analyses of the relation between FADS variants and the nine continuous outcome variables (fatty acids) were conducted applying an additively coded model. P-values were corrected for multiple testing by Bonferroni correction.
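In Python, one such single-SNP model might look as follows (column names, the file name, and the log transform for the skewed PUFAs are assumptions based on the description above):

    # Single-SNP linear regression with additive genotype coding (0/1/2)
    # and Bonferroni correction over the 5 SNPs x 9 fatty acids tested.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("pooled_cohort.csv")   # assumed columns: AA, EPA, rs174546, ...
    N_TESTS = 5 * 9

    fit_aa = smf.ols("AA ~ rs174546", data=df).fit()            # % of phospholipids
    fit_epa = smf.ols("np.log(EPA) ~ rs174546", data=df).fit()  # log scale, skewed PUFA

    for label, fit in [("AA", fit_aa), ("log EPA", fit_epa)]:
        p_bonf = min(fit.pvalues["rs174546"] * N_TESTS, 1.0)
        print(label, fit.params["rs174546"], p_bonf)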
Logistic regression was applied to evaluate the effect of each single SNP and of each fatty acid separately on the dichotomously coded outcome "parental reported eczema during the first two years of life". No corrections for multiple testing were applied in the logistic regression.
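A corresponding logistic model, with ORs and 95% confidence intervals for minor-allele carriers, could be sketched as below; the indicator coding and the study-cohort covariate follow the description above, while the paper's full adjustment set is omitted:

    # Logistic regression of eczema (0/1) on one indicator-coded SNP,
    # adjusted for study cohort; the adjustment set here is illustrative.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("pooled_cohort.csv")   # assumed columns: eczema, rs174546, cohort

    m = smf.logit("eczema ~ C(rs174546) + C(cohort)", data=df).fit()
    or_het = np.exp(m.params["C(rs174546)[T.1]"])     # one minor allele vs none
    or_hom = np.exp(m.params["C(rs174546)[T.2]"])     # two minor alleles vs none
    ci_het = np.exp(m.conf_int().loc["C(rs174546)[T.1]"])
    print(or_het, or_hom, ci_het.values)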
For details, see Supporting Information Appendix S6.
Results
Study characteristics for both study populations separately and combined are listed in Table 1.
There are some differences between the two studies in the percentages of boys, high maternal education, family history of asthma or allergy, maternal smoking during pregnancy, and exclusive breastfeeding. Mean percentage contributions of fatty acids to plasma phospholipids in the KOALA cohort and to plasma glycerophospholipids in the LISA cohort are similar, except for an almost twofold higher percentage value of EPA in the KOALA study, which might be related to the different population studied, with a potentially higher fish consumption in the Netherlands, as well as to the different analytical method used.
Information regarding the position, possible functional region, and genotyping frequencies for the five analyzed SNPs of the FADS1 FADS2 gene cluster is given in Supporting Information Table S1. The minimum P-value of Fisher's exact test for violation of Hardy-Weinberg equilibrium (HWE) for any of the five SNPs was 0.48 (rs174546) in the whole study population and 0.62 and 0.61 for the KOALA and LISA study populations separately.
Lewontin's D' and the pairwise squared correlations r² for the KOALA study and for the LISA study are depicted in Figure 2.
Single SNP associations with fatty acids
Mean levels and standard deviations for each fatty acid by genotype of each SNP (rs174545, rs174546, rs174556, rs174561, rs3834458) for the combined study population and for the KOALA and LISA studies separately are listed in the Online Repository (see Supporting Information Table S2, Table S3 and Table S4). Note that the mean levels of GLA, ALA, EPA and DHA are natural-log-transformed means to account for the severely skewed distributions of these fatty acids.
Single SNP regression analyses of each additive coded SNP (0/1/2) with each fatty acid in the combined study population of the KOALA and LISA studies at age 2 years revealed that all SNPs are highly significantly associated with all fatty acids except for ALA and EPA (natural log scale, Table 2). This is true even though all P-values have been conservatively corrected for multiple testing according to Bonferroni's method. With respect to ALA, corrected significance at the 5% level was only reached for rs174545 and rs3834458 but not for the other SNPs. Regarding EPA, regression coefficients were in the same direction and the same order of magnitude as for DPA and DHA, but corrected P-values were only between 0.05 and 0.09. Separate analyses for both study populations revealed that in the LISA study associations of the SNPs were significant only with LA, GLA, DGLA, AA and EPA after correction for multiple testing (see Supporting Information Table S5 and Table S6). In the KOALA study all SNPs showed a significant association with all n-6 and n-3 PUFAs except EPA. However, regression coefficients of SNPs were in the same direction for all PUFAs so the lack of significance regarding n-3 PUFA in the LISA study may just reflect a loss of power due to the smaller sample size of the LISA study compared to that of the KOALA study.
Associations of single SNPs with parental reported eczema
Of all analyzed indicator-coded SNPs of the FADS1 FADS2 gene cluster, none showed a statistically significant association with the dichotomous outcome parental reported eczema in the first 2 years of life (Table 3), in either unadjusted or adjusted analyses (see methods). Odds ratios (OR) for carriers of one or two minor alleles in comparison to non-carriers range from 1.3 to 1.5 in unadjusted and 1.3 to 1.4 in adjusted analyses, respectively. However, the 95% confidence limits always include the null effect of an OR of 1.0. Moreover, tests for multiplicative allelic trend (genotypes coded 0, 1, 2) also showed that none was significant at the 5% level, even though no correction for multiple testing was applied.
Separate analyses for both study populations revealed that in the LISA study all SNPs of the FADS1 FADS2 gene cluster showed significant associations with the dichotomous outcome parental reported eczema in the first 2 years of life in both unadjusted and adjusted analyses, with ORs of about 2 and 4 for heterozygous and homozygous minor allele carriers, which is in good agreement with a multiplicative increase of risks (confirmed by P-values for multiplicative trend between 0.003 and 0.005 without correction for multiple testing; see Supporting Information Table S7). By contrast, no associations were found within the KOALA study, where all ORs were between 0.9 and 1.2 and the upper boundaries of the 95% confidence intervals were below 2.0.
No significant association of any analyzed SNP of the FADS gene with total IgE level or specific IgE at the age of one or two years could be established in either the KOALA or the LISA study (results not presented).
Associations of fatty acids with parental reported eczema
No single PUFA was significantly associated with parental reported eczema of the child in the first 2 years of life after adjustment (see methods), either in analyses of the combined study population (Table 4) or in separate analyses for the KOALA and LISA studies (see Supporting Information Table S8).
Discussion
This is the first study presenting associations between the genetic variants in the FADS1 FADS2 gene cluster with fatty acid composition in a population based sample of children.
We found that all five analyzed variants of the FADS1 FADS2 gene cluster are associated with polyunsaturated fatty acids LA, GLA, DGLA, AA, A, ALA, EPA, DPA and DHA, in particular with AA. Except for the n-3 fatty acids ALA and EPA these associations are highly significant even after conservative Bonferroni correction for multiple testing. For most PUFAs this is in line with previous reports showing these associations in adult populations [4][5][6][7][8]. However, in contrast to previous studies in adults, all SNPs in the present study in children are also highly significantly associated with DHA in one of the study populations (KOALA study). In the same population, Moltó et al found a relation between DHA in blood of the KOALA mothers during pregnancy and the mothers' FADS variants [35].
Our study confirms previous reports that carriers of the minor alleles showed higher levels of the n-6 precursor fatty acids LA and DGLA and the n-3 precursor fatty acid ALA, and decreased product levels of the n-6 fatty acids GLA, AA, and A and the n-3 fatty acids EPA, DPA, and DHA. Thus, carriers of the minor alleles show enhanced desaturase substrate levels (substrate accumulation) and decreased desaturase product levels, indicative of a lower desaturase activity, in agreement with previous studies [5,6]. Thus, our results confirm that the FADS1 FADS2 gene cluster modulates PUFA metabolism and demonstrate that this is also the case in children at the age of 2 years. A functional basis for such a regulation was recently shown by Lattka et al. in a study investigating the FADS2 variants rs3834458 and rs968567 [36]. According to this study, both SNPs are potential promoter polymorphisms located in a region important for transcription regulation. The minor allele of SNP rs968567, but not of rs3834458 (also studied here), showed a statistically significant increase in promoter activity and in binding activity to two protein complexes activating the transcription factor ELK1 in that study. However, one of the rare other functional studies on FADS2 found a decreased promoter activity for rs3834458, suggesting the need for many more functional studies [37]. A previous study in adults found protective associations of carriers of minor alleles of the FADS1 FADS2 gene cluster with allergic rhinitis and atopic eczema [6]. In our analyses we could confirm such an association between FADS variants and eczema only for the German study population, but clearly not within the larger sample of children from the Netherlands. Whether these inconsistent results suggest that in children less common variants of the FADS1 FADS2 gene cluster are actually related to the development of eczema within the first 2 years of life requires further investigation in additional independent study populations.
In contrast to some previous studies summarized in several reviews [9][10][11], we found no evidence for a direct link between the analyzed blood PUFA levels and eczema. This was true for analyses based on the combined study population as well as for separate analyses of the KOALA and LISA cohorts. Apparently, the development of the disease cannot be reduced to one underlying pathway of fatty acid metabolism. On the other hand, larger sample sizes and analyses in further study populations may be necessary to draw any final conclusion on the role of fatty acid composition in blood in eczema. Also, studies of gene-diet interactions may be needed to resolve these inconsistencies, since populations vary widely in the intake of n-3 PUFAs from fish products.
Strengths and Limitations
With data on variants of the FADS1 FADS2 gene cluster, PUFA and information on eczema status within the first 2 years of life for more than 800 children this is a large study despite the potential lack of power to detect small effects regarding eczema.
Genotyping for both study populations (KOALA, LISA) was done in the same lab at the Helmholtz Center Munich.
Fatty acids were derived from plasma phospholipids in the KOALA study and from serum glycerophospholipids in the LISA study [23]. Indeed, the fatty acid composition data were quite similar for both study populations, except for EPA, which might reflect different dietary habits, with higher consumption levels of fish, the prime dietary source of EPA, in the Netherlands than in Bavaria (cf. Table 1).
We used a similar definition of the outcome "parental reported eczema of the child within the first 2 years of life", but with slightly different wording of the questions and slightly different timing of the questionnaires, covering the entire first two years of life in the two cohorts. Despite these similarities, we found more than double the percentage of parental reported eczema in the children of the KOALA study (30.6%) compared with the sample of the German LISA study (14.1%), even though the prevalence of asthma and allergies was lower in KOALA parents (57.0%) than in LISA parents (68.7%). This raises the question of whether lifestyle differences are responsible for the marked difference in eczema prevalence between the two cohorts. There are two additional elements which further press this question: first, the FADS1 and FADS2 polymorphism prevalence is remarkably similar in the two populations, suggesting a similar genetic background; second, in the LISA study, 71% of children were exclusively breastfed vs. only 50% in the KOALA study. For comparison, the reported prevalences of eczema defined by symptoms in young children in two other birth cohort studies are 25% in children of age 4 years in the Netherlands and 16% for the first 2 years of life in Germany [38,39]. However, we do not think that any differences in eczema between the LISA and KOALA studies affected the results to an appreciable extent, since we conducted, in addition to unadjusted analyses, analyses adjusted for study cohort. The higher prevalence of eczema in KOALA could point to a higher proportion of non-atopic eczema, and if the FADS effect were confined to atopic (IgE-mediated) eczema, it could have been diluted by non-atopic cases in the KOALA study. However, the absence of a FADS effect on total and specific IgE in both cohorts makes this explanation unlikely. At present we have no convincing explanation for why we found a significant association between variants of the FADS gene and eczema only in the German LISA study, but not in the Dutch KOALA study. Since Moltó et al found that the DHA deficit in homozygous minor allele carriers could be overcome by intake of fish at the recommended level of 2 portions a week [35], we speculate that the difference between LISA and KOALA may reflect a lower intake of n-3 long chain PUFAs in LISA, as a consequence of which the genetic effect is manifest in LISA but not in KOALA. Based on this, we recommend that further studies be done in populations with a low n-3 PUFA intake.
In conclusion, this is the first study that confirms in children of age 2 years the previously found associations of genetic variants in the FADS1 FADS2 gene cluster with fatty acid composition in serum phospholipids or glycerophospholipids and that also analyzed the potential influence of FADS1 FADS2 genotypes and PUFAs on eczema.
Variants of the FADS1 FADS2 gene cluster clearly do regulate the metabolism of PUFA in young children. Inconsistent results were found for a link between these genetic variants and eczema. In the German LISA study, all SNPs were significantly associated with eczema; in the Dutch KOALA study, this was clearly not the case. In both study populations, PUFA was not associated with eczema. Thus, the hypothesis of an inflammatory link between PUFA and eczema via the metabolic pathway of LC-PUFAs as precursors of inflammatory prostaglandins and leukotrienes could not be confirmed by these data. This would suggest either that this pathway is not, or is only marginally, involved in eczema development, that other risk factors for early childhood eczema may be more important, or that eczema may have a heterogeneous etiology with only a small segment of the population susceptible to effects of endogenous fatty acid metabolism or gene-diet interaction; or a combination of these arguments.
Supporting Information
Appendix S1 Study design and population. Found at: doi: 10
"Medicine",
"Biology"
] |
How to Effectively Reduce Honey Adulteration in China: An Analysis Based on Evolutionary Game Theory
Apiculture has been greatly developed in recent years in China. Beekeeping cooperatives and honey manufacturing enterprises have increased rapidly. As a result, a variety of honey products have entered the market, adding vitality to the food economy; however, the adulteration of honey products is on the rise in China. Previous attempts to control the adulteration of honey products mostly relied on technical, product-specific measures, and there was a lack of modeling research to guide the supervision of the honey product industry. In order to help local governments to better control the adulteration of honey products from a management perspective, this paper establishes an evolutionary game model composed of beekeeping cooperatives, honey product enterprises, and local governments. Through stability analysis and model simulation, we found that local government subsidies to cooperatives have little impact on the game system. Local government penalties to cooperatives and price adjustments of unadulterated raw honey by cooperatives are effective management tools to reduce the adulteration behavior of cooperatives. Local government penalties for enterprises are an effective management tool to reduce the adulteration behavior of enterprises. This research provides useful information for government agencies to design appropriate policies/business modes so as to promote sustainability and the healthy development of the honey product industry in China.
Introduction
As an important part of modern agriculture, apiculture is a clean method of production that provides large amounts of nutrient-rich honey products with high economic value for society. Honey products have become indispensable food resources in people's daily lives. China is the largest beekeeping country in the world, and more than 300,000 farmers are engaged in beekeeping [1]. The bee farmers in China have raised more than 9 million colonies of Apis mellifera [1]. In addition to Apis mellifera, the breeding scale and honey production of the native honeybee Apis cerana are also very impressive. According to the literature, the number of Apis cerana raised in China accounts for about one-third of the total number of honeybee colonies [2]. In China's remote agricultural areas, the development of apiculture plays an increasingly important role in reducing damage to the natural environment and improving the living conditions of farmers [3]. In areas where bee farmers are concentrated, the local governments help them to set up beekeeping cooperatives. The establishment of beekeeping cooperatives promotes the sale of raw honey to enterprises and offsets the disadvantage of individual farmers in trading with companies. Li et al. sampled and analyzed 535 beekeeping cooperatives in China, 180 of which had between 1000 and 5000 bee colonies and 40 of which had over 10,000 bee colonies. A total of 95.7% of the beekeeping cooperatives were registered, and 78.31% were established after 2011 [4]. These results show that beekeeping cooperatives in China are becoming increasingly standardized and have become the main mode of raw honey production.
With the rapid development of apiculture in China, the quantity of raw honey produced by beekeepers has greatly increased, and many honey product manufacturing enterprises have mushroomed. China's honey products not only meet domestic demand but are also exported to many countries, and overall production is among the largest in the world. In 2019, the export volume of honey products was approximately 124,494 tons, with an export value of USD 294 million [5]. Given the popularity of honey products and the increasing demand from urban residents, profit motives drive adulterated products to constantly emerge in the market.
There are various types of bee products, such as honey, royal jelly, bee pollen, propolis, wax, bee venom, and bee bread [6]. The most common and most consumed bee products in the market are honey products. Studies have shown that honey has anti-inflammatory, antibacterial, and antioxidant properties and that it helps lower blood pressure and blood lipids. Therefore, honey products are also widely used as ingredients in apitherapy and healthcare food [7][8][9]. There are many types of honey products; they vary depending on nectar sources and processing techniques. There are adulterated versions of every type of honey product, sold alongside the production and sales of unadulterated honey products. Furthermore, beekeeping cooperatives and honey product enterprises usually vary greatly in size, and even the same types of products lack unified industrial standards. As a result, adulteration may occur in all aspects of honey production. As for raw honey, the most common adulteration practices are feeding bees sugar and adding sweeteners (such as caramel, fructose, and corn syrup) to honey [10]. For honey enterprises, there are many ways to produce adulterated honey products, including the use of adulterated raw honey as a raw material, the blending of syrups into products, and the synthesis of chemical materials [6]. Moreover, the government's supervision of honey products in China is still in its infancy. The above factors provide fertile ground for the adulteration of raw honey and honey products, and it is often difficult for ordinary consumers to identify authentic honey products, which increases the prevalence of adulterated products. The spread of adulterated honey products in China has led to distrust of honey products and doubts about the credibility of the government. According to an official report of the European Union, one coordinated action found that a significant share of honey imported into the EU was suspected of being adulterated (46% of 320 samples), and the highest absolute number of suspicious consignments originated from China (74%) [11]. Products exported overseas are frequently returned due to substandard quality, which not only causes economic losses but also has a negative impact on the international reputation of manufacturing in China.
Reducing the adulteration of honey products depends on the supervision and regulation of the whole industry by local governments, and only regulation by local governments may possibly eliminate the prevalence of adulteration in the honey product industry. The evolutionary game model has been widely used in drug supervision and management [32], public transportation management [33], the promotion of new energy use [34], the management of the utilization of wild animal and plant resources [35], and other fields [36]. The evolutionary game model has positive guiding value for practical management by simulating and predicting the behavior of different stakeholders. The theoretical model predicts that the players of the game will gradually reach equilibrium, but changing the conditions will speed up or slow down the evolution time for different stakeholders to reach equilibrium [37]. Therefore, changing the external conditions can provide guidance to all parties involved. Different from the supervision of drugs and public transportation, the supervision of honey products (SHP) involves the entire industrial chain, from raw honey to honey products; therefore, honey product supervision is more complicated. This paper establishes an evolutionary game model of the different stakeholders in the regulation of honey products in China (SHP-game) and analyzes the influence of different factors on the evolution of the game system, to provide guidance for the regulation of honey products in China and to accelerate the reduction of adulteration in the market from a management perspective.
Assumptions of Game Model
(1) Problem description. The relationship between the different stakeholders in the SHP-game model is shown in Figure 1. There are three main participants in the SHP-game model, namely beekeeping cooperatives (BCs), honey product enterprises (HEs), and local governments (LGs). BCs provide raw honey to HEs. HEs produce various commercial honey products and put them on the market, while the LGs supervise the behaviors of BCs and HEs (Figure 1). In China, BCs are established by bee farmers through certain agreements. In many areas, raw honey materials are uniformly sold by BCs to HEs. In this way, bee farmers can guarantee the sales of raw honey and avoid losses caused by price fluctuations; at the same time, HEs can guarantee a sufficient supply of raw honey. In order to encourage BCs to supply unadulterated raw honey to HEs and to encourage HEs to produce products that meet quality standards, the LGs provide corresponding subsidies to BCs and HEs. Conversely, LGs punish the BCs and HEs that adulterate honey in order to stop this practice. The parameters of the relevant stakeholders in the SHP-game model are shown in Table 1.
Table 1. Parameters of the SHP-game model.
C_u: the cost of producing unadulterated raw honey for cooperatives.
R_u: the price of unadulterated raw honey.
S_g: subsidies for cooperatives that produce unadulterated raw honey.
C_a: the cost of producing adulterated raw honey.
R_a: the price of adulterated raw honey.
P_a: government penalties to cooperatives that produce adulterated raw honey.
C_q: the cost of producing qualified honey products.
R_q: the price of qualified honey products.
I_q: government incentives to enterprises that produce qualified honey products.
C_f: the cost of producing adulterated honey products.
R_f: the price of adulterated honey products.
P_f: government penalties to enterprises that produce adulterated honey products.
C_g: the cost of government supervision.
R_g: the benefits governments gain from qualified bee products in the market.
P_g: the economic losses of governments caused by adulterated products in the market.
(2) Model hypothesis. Based on the above relationships, some complex conditions can be simplified without changing the nature of the problem, and the following assumptions are made:
1. The three parties are all participants of bounded rationality, and strategy selection gradually evolves toward the optimal strategy over time.
2. The strategy set for BCs is to produce unadulterated raw honey (UC) or to produce adulterated raw honey (AC); the strategy set for HEs is to produce qualified bee products (QE) or to produce fake (adulterated) bee products (FE); the strategy set for LGs is to supervise (SG) or not supervise (NG). The game tree and payoff matrix are shown in Figure 2.
3. The cost of producing unadulterated raw honey (C_u) is greater than that of adulterated materials (C_a), and the price of unadulterated honey (R_u) is higher than that of adulterated materials (R_a).
4. The cost of producing qualified products (C_q) is greater than that of adulterated products (C_f), and the price of qualified products (R_q) is higher than that of adulterated products (R_f).
5. The cooperatives, enterprises, and local governments act to maximize their own interests.
Replicator Dynamic Equation
Evolutionary game theory is a combination of game theory and dynamic evolutionary process analysis, with an emphasis on dynamic equilibrium [37]. According to evolutionary game theory, if the payoff of a certain strategy is higher than the average payoff of the population, the percentage of individuals adopting this strategy in the population will gradually increase, and its growth rate can be obtained by the replicator dynamic differential equation. Thus, the replicator dynamic equation describes the variation in the frequency of a particular strategy adopted by a population over time [38]. The higher the replicator dynamic value, the more the proportion of the strategy will increase.
According to the above payoff matrix, the expected payoff of BCs that use the UC strategy (E_11), the expected payoff of the AC strategy (E_12), and the average expected payoff of BCs (E_1 = x E_11 + (1 − x) E_12, where x is the proportion of BCs adopting UC) can be calculated. The replicator dynamic equation of the UC strategy is then F(x) = dx/dt = x(E_11 − E_1) = x(1 − x)(E_11 − E_12). Likewise, the expected payoffs of HEs that use the QE strategy (E_21) and the FE strategy (E_22), together with the average expected payoff of HEs (E_2 = y E_21 + (1 − y) E_22, where y is the proportion of HEs adopting QE), give the replicator dynamic equation of the QE strategy, F(y) = dy/dt = y(1 − y)(E_21 − E_22). The expected payoffs of the SG strategy (E_31) and the NG strategy (E_32), together with the average expected payoff of LGs (E_3 = z E_31 + (1 − z) E_32, where z is the proportion of LGs adopting SG), give the replicator dynamic equation of the SG strategy, F(z) = dz/dt = z(1 − z)(E_31 − E_32). The concrete payoff expressions follow from the payoff matrix in Figure 2.
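To make the structure of these equations concrete, the short sketch below encodes the three replicator equations in Python. The payoff expressions used here are illustrative assumptions assembled from the Table 1 parameters (the exact payoff matrix in Figure 2 is not reproduced), and the parameter values are placeholders rather than the survey data of Table 3.

```python
import numpy as np

# Placeholder parameter values (see Table 1 for definitions); not the paper's data.
R_u, C_u, R_a, C_a = 16.55, 12.0, 14.0, 8.0
R_q, C_q, R_f, C_f = 39.9, 30.0, 35.0, 22.0
S_g, P_a, P_f, C_g = 3.0, 10.0, 20.0, 5.0

def replicator_rhs(state):
    """Right-hand side of the tripartite replicator system.

    x, y, z are the proportions of BCs playing UC, HEs playing QE, and LGs
    playing SG. The payoff expressions below are assumed forms built from
    the Table 1 parameters, not the exact payoff matrix of Figure 2.
    """
    x, y, z = state
    E11 = R_u - C_u + z * S_g                     # UC: subsidised when supervised
    E12 = R_a - C_a - z * P_a                     # AC: fined when supervised
    E21 = R_q - C_q                               # QE: qualified products
    E22 = R_f - C_f - z * P_f                     # FE: fined when supervised
    E31 = -C_g + (1 - x) * P_a + (1 - y) * P_f    # SG: cost offset by fines
    E32 = 0.0                                     # NG: no cost (losses P_g omitted)
    return np.array([
        x * (1 - x) * (E11 - E12),
        y * (1 - y) * (E21 - E22),
        z * (1 - z) * (E31 - E32),
    ])
```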
Stability Analysis of the Evolutionary Game Model
When the replicator dynamic equations of the UC, QE, and SG strategies all equal 0, the system is in equilibrium; that is, F(x) = F(y) = F(z) = 0. Solving these equations yields the pure-strategy equilibrium points M_1(0, 0, 0) through M_8(1, 1, 1), which constitute the boundary of the solution domain {(x*, y*, z*) | 0 < x* < 1; 0 < y* < 1; 0 < z* < 1}, and the surrounding area is the equilibrium solution domain of the three stakeholders. Because the asymptotically stable solution of a multi-agent evolutionary game must be a strict Nash equilibrium, only the equilibrium points M_1-M_8 need to be considered, and the stability of each equilibrium point should be further analyzed.
In this model, the Jacobian matrix J consists of the partial derivatives of F(x), F(y), and F(z) with respect to x, y, and z. When all eigenvalues of the Jacobian matrix are negative, the equilibrium point is an evolutionarily stable strategy (ESS). According to the hypotheses of the game system (see the model hypotheses above), the positive or negative signs of some eigenvalues can be determined. The eigenvalues of the Jacobian matrix that correspond to each equilibrium point are shown in Table 2. It can be seen from these eigenvalues that the game system has three different ESSs under different conditions: M_1(0, 0, 0), M_2(1, 0, 0), and M_5(1, 1, 0).
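As a quick numerical check of this eigenvalue criterion, the sketch below evaluates a finite-difference Jacobian of the replicator_rhs function from the previous sketch at the three candidate equilibria; with the placeholder payoffs assumed there, the sign pattern of the eigenvalues indicates which point is an ESS.

```python
import numpy as np

def numerical_jacobian(f, state, eps=1e-6):
    """Finite-difference Jacobian of a vector field f at a given state."""
    state = np.asarray(state, dtype=float)
    f0 = f(state)
    J = np.zeros((len(state), len(state)))
    for j in range(len(state)):
        step = state.copy()
        step[j] += eps
        J[:, j] = (f(step) - f0) / eps
    return J

# An equilibrium is an ESS when all Jacobian eigenvalues are negative.
for point in [(0, 0, 0), (1, 0, 0), (1, 1, 0)]:
    eig = np.linalg.eigvals(numerical_jacobian(replicator_rhs, point))
    print(point, np.round(eig.real, 3), "ESS" if (eig.real < 0).all() else "not ESS")
```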
Case 1: It can be seen in Table 2 that two inequalities need to be satisfied simultaneously to reach stability point M_1(0, 0, 0). According to the first inequality, R_u − C_u < R_a − C_a: when the benefit of the UC strategy is less than the benefit of the AC strategy, the cooperative will choose to produce and provide adulterated raw honey to the honey product enterprise. According to the second inequality, P_f + P_g < C_g: when the sum of the economic losses and penalties caused by adulterated honey products is less than the supervision cost, the government will choose not to supervise.
Case 2: Three inequalities need to be satisfied simultaneously to reach stability point M_2(1, 0, 0). According to the first inequality, R_a − C_a < R_u − C_u: when the benefit of the AC strategy is less than the benefit of the UC strategy, the cooperative will choose to produce and provide unadulterated raw honey to the honey product enterprise. According to the second inequality, R_q − C_q < R_f − C_f: when the benefit of the QE strategy is less than the benefit of the FE strategy, the HE will choose to produce adulterated products. According to the third inequality, P_f + P_g < C_g: when supervision costs outweigh the sum of the economic losses and penalties caused by adulterated products, the government will choose not to supervise.
Case 3: Only one condition needs to be met to reach stability point M_5(1, 1, 0), namely R_f − C_f < R_q − C_q. This restriction suggests that the game system will gradually reach (1, 1, 0) as long as the benefits of qualified honey products outweigh the benefits of adulterated products.
Through the above stability analysis, the game system has different ESSs under different conditions. M_5(1, 1, 0) is the ideal state among them. To ensure that the bee product industry can reach this ideal state, LGs need to ensure through supervision that the benefits of qualified products outweigh the benefits of adulterated products. By using real survey data, we simulated the influence of key factors on the evolution process of the stakeholders under the ideal ESS state.
Simulation Analysis of Main Influencing Factors
Based on the above model analysis, simulation is used to trace the dynamic evolution of the UC, QE, and SG strategies in the game. Parameter values were set in accordance with the literature and market data, and the relationships between parameters observed in practice were respected throughout. The initial assignment for each parameter is shown in Table 3, and the phase diagram with the initial parameters is shown in Figure 3. In this study, the effects of five main parameters on the evolutionary process of the game system were evaluated: the penalty for adulterating BCs (P_a: 10-30 CNY/kg), the penalty for adulterating HEs (P_f: 20-60 CNY/kg), the subsidy to BCs (S_g: 3-8 CNY/kg), the price of unadulterated raw honey (R_u: 16.55-21.55 CNY/kg), and the price of qualified bee products (R_q: 39.9-54.9 CNY/kg). These five parameters have greater flexibility and operability in management practice. In the simulation figures, the y-axis represents the probability of a certain strategy. The evolutionary time (x-axis) stands for the normalized development time after a certain evolution mode begins; it is a normalized time parameter with no unit [40].
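A minimal sketch of such a parameter sweep, reusing the replicator_rhs function from the earlier sketch (and therefore inheriting its assumed payoff expressions and placeholder values), might look as follows; the initial state (0.3, 0.3, 0.3) and the time horizon are arbitrary choices for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

t_span, t_eval = (0.0, 50.0), np.linspace(0.0, 50.0, 400)
for penalty in (10.0, 20.0, 30.0):        # the paper's sweep range for P_a
    globals()["P_a"] = penalty            # replicator_rhs reads the module-level P_a
    sol = solve_ivp(lambda t, s: replicator_rhs(s), t_span, [0.3, 0.3, 0.3],
                    t_eval=t_eval)
    plt.plot(sol.t, sol.y[0], label=f"P_a = {penalty:g} CNY/kg")
plt.xlabel("normalized evolutionary time")
plt.ylabel("probability of the UC strategy, x(t)")
plt.legend()
plt.show()
```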
Effect of Main Parameters on the Evolutionary Process of the UC Strategy
The effects of P_a, P_f, R_u, R_q, and S_g on the probability of the UC strategy under the ESS of (1, 1, 0) are presented in Figure 4 (subfigures A-E show the effects of P_a, P_f, R_u, R_q, and S_g on the evolution of the UC strategy, respectively). The results show that P_a and R_u have obvious effects on the strategy choice of BCs. With the increase in P_a and R_u, the probability of BCs producing unadulterated raw honey increases noticeably (Figure 4A,C). This indicates that LG penalties for adulterating BCs and BCs' price adjustment of unadulterated raw honey are effective management tools to eliminate the adulteration behavior of BCs. Figure 4B,E indicate that P_f and S_g have a smaller influence on the strategy choice of BCs compared to P_a and R_u: with the increase in P_f and S_g, the probability of a BC producing unadulterated honey increases only slightly at the same evolutionary time. Figure 4D shows that price adjustment of qualified bee products has no impact on the strategy choice of BCs.
Effect of Main Parameters on the Evolutionary Process of the QE Strategy
The effects of the five main parameters on the probability of the HEs' QE strategy under the ESS of (1, 1, 0) are presented in Figure 5. The results show that P_f has the most obvious effect on the strategy choice of HEs. With the increase in P_f, the probability of HEs producing qualified bee products noticeably increases (Figure 5B), which indicates that LG penalties for adulterating HEs are an effective management tool to eliminate the adulteration behavior of HEs. Price adjustments of qualified honey products also have a positive effect on the probability of the QE strategy. With the increase in R_q, the probability of HEs producing qualified bee products noticeably increases (Figure 5D), although the effect is not as strong as that of P_f at the same evolutionary time. Increasing the price of qualified bee products is thus an alternative management tool to reduce HEs' adulteration behavior. Figure 5A indicates that P_a has a minor influence on HE strategy choice, while Figure 5C,E show that R_u and S_g have no influence on it.
Effect of Main Parameters on the Evolutionary Process of LG's SG Strategy
The effects of the five main parameters on the probability of the SG strategy under the ESS of (1, 1, 0) are presented in Figure 6 (for Figures 5 and 6, subfigures A-E represent the effects of P_a, P_f, R_u, R_q, and S_g on the evolution of the QE and SG strategies, respectively). The results show that all five parameters have obvious effects on the strategy choice of LGs. P_f and P_a have the most obvious influence on the probability of the SG strategy: the probability of LGs supervising increases with the increase in P_a and P_f (Figure 6A,B). R_u and R_q have a medium effect on the probability of the SG strategy at the same evolutionary time. In contrast to the effect of P_a and P_f, the probability of the SG strategy decreases with the increase in R_u and R_q.
Discussions and Conclusions
Apiculture contributes to ecological restoration and poverty eradication in remote regions [41,42]; therefore, the local governments of China have encouraged the development of apiculture by providing free training and financial subsidies for bee farmers. As a result, the quantity of raw honey and honey products has grown rapidly in recent years. Driven by profits, the adulteration of bee products is on the rise [6], and it is urgent that governments curb it through supervision. In this paper, a tripartite evolutionary game model consisting of beekeeping cooperatives, enterprises, and governments has been established. The model simulated the behavior of each stakeholder in the process of bee product supervision to provide guidance for honey product supervision.
The model introduces both subsidy and punishment policies, which is in line with the current situation in China [4,5]. Meanwhile, for the sake of analysis, the model assumes that the subsidy and punishment strategies for cooperatives target individual cooperatives rather than individual bee farmers within cooperatives. The paper analyzes the evolution paths of beekeeping cooperatives, enterprises, and governments in the game system, aiming to improve government management and hasten the end of widespread honey adulteration. Through simulation, this paper illustrates the specific impact of different factors on the evolution of the three parties involved.
(1) A measure that might be used by cooperatives in management practice is to adjust the price of raw honey. As demonstrated by our simulation, if cooperatives increase the price of unadulterated raw honey, they will reach the ESS faster; however, this has no impact on the evolution process of enterprises and governments. By increasing the price of unadulterated raw honey, cooperatives using the UC strategy gain a bigger profit advantage over cooperatives that use the AC strategy, thus promoting the probability of the UC strategy among cooperatives. (2) As with cooperatives, if enterprises increase the price of qualified bee products, the probability of enterprises using the QE strategy will increase because of the profit advantage it provides compared to the FE strategy. (3) The behavior of local governments may affect the evolution of the stakeholders in many respects [32-34,36].
Local governments' subsidies to cooperatives have no obvious impact on the evolution process of BCs and HEs, whereas penalties for cooperatives and enterprises that adulterate honey products can effectively increase the proportion adopting the UC and QE strategies. In the field of new energy and public transport promotion, government subsidies play an important role in promoting the evolution of the system [33,34], because governments use high subsidies to attract stakeholders to new energy technologies or public transportation [33,34]. In the SHP-game model, however, subsidies from local governments usually do not exceed the costs borne by cooperatives or enterprises; the impact of subsidies on the overall value of the industry is therefore too small to affect the actual evolution path of the three parties. Penalties for cooperatives and enterprises that adulterate honey products are generally at least equal to, and sometimes several times greater than, R_a or R_f, as in the drug supervision game [32]. The simulation shows that the effect of punishment in the SHP-game is the same as in the drug supervision game: it can effectively change cooperatives' and enterprises' behavior.
To sum up, this paper draws important conclusions regarding the SHP-game system. Generally, local governments' subsidies to cooperatives have little impact on the evolution path of all stakeholders in the game. LG penalties to BCs and BCs' price adjustment of unadulterated honey are effective management tools to reduce the adulteration behavior of BCs.
LGs' penalties for adulterated HEs are an effective management tool to reduce the adulteration behavior of HEs. | 6,146 | 2023-04-01T00:00:00.000 | [
"Economics"
] |
Exploring the fragmentation efficiency of proteins analyzed by MALDI-TOF-TOF tandem mass spectrometry using computational and statistical analyses
Matrix-assisted laser desorption/ionization time-of-flight-time-of-flight (MALDI-TOF-TOF) tandem mass spectrometry (MS/MS) is a rapid technique for identifying intact proteins from unfractionated mixtures by top-down proteomic analysis. MS/MS allows isolation of specific intact protein ions prior to fragmentation, allowing fragment ion attribution to a specific precursor ion. However, the fragmentation efficiency of mature, intact protein ions by MS/MS post-source decay (PSD) varies widely, and the biochemical and structural factors of the protein that contribute to it are poorly understood. With the advent of protein structure prediction algorithms such as Alphafold2, we have wider access to protein structures for which no crystal structure exists. In this work, we use a statistical approach to explore the properties of bacterial proteins that can affect their gas phase dissociation via PSD. We extract various protein properties from Alphafold2 predictions and analyze their effect on fragmentation efficiency. Our results show that the fragmentation efficiency from cleavage of the polypeptide backbone on the C-terminal side of glutamic acid (E) and asparagine (N) residues were nearly equal. In addition, we found that the rearrangement and cleavage on the C-terminal side of aspartic acid (D) residues that result from the aspartic acid effect (AAE) were higher than for E- and N-residues. From residue interaction network analysis, we identified several local centrality measures and discussed their implications regarding the AAE. We also confirmed the selective cleavage of the backbone at D-proline bonds in proteins and further extend it to N-proline bonds. Finally, we note an enhancement of the AAE mechanism when the residue on the C-terminal side of D-, E- and N-residues is glycine. To the best of our knowledge, this is the first report of this phenomenon. Our study demonstrates the value of using statistical analyses of protein sequences and their predicted structures to better understand the fragmentation of the intact protein ions in the gas phase.
Introduction
Top-down proteomic (TDP) analysis involves the identification of the mature sequence and posttranslational modifications (PTM) of undigested proteins using mass spectrometry (MS), tandem mass spectrometry (MS/MS), and a variety of gas phase dissociation techniques. These dissociation techniques include collision-induced dissociation (CID) [1], collision-activated dissociation (CAD) [2], high energy dissociation (HCD) [3], sustained-off-resonance irradiation (SORI)-CAD [4], surface-induced dissociation (SID) [5], in-source decay (ISD) [6], post-source decay (PSD) [7], blackbody infrared radiative dissociation (BIRD) [8], ultraviolet photodissociation (UV-PD) [9], electron capture dissociation (ECD) [10], electron transfer dissociation (ETD) [10], and many others. These dissociation techniques can be broadly grouped as either ergodic or non-ergodic. Ergodic techniques (CID, CAD, SORI-CAD, HCD, SID, PSD, BIRD) involve depositing energy into a protein ion in the gas phase such that it is redistributed amongst all the rotational/vibrational modes of the molecule over a timescale of microseconds (μs), milliseconds (ms), or seconds (s), after which the metastable protein ion dissociates, resulting in detectable fragment ions. Non-ergodic techniques (ECD, ETD, UV-PD, ISD) involve bond cleavage as a result of proton/electron recombination or by absorption of UV photons. Unlike ergodic dissociation techniques, non-ergodic techniques have the advantage that PTMs attached at residue side-chains can be localized to specific residues, whereas ergodic techniques may result in dissociative loss of the attached PTM before its location has been determined definitively.
Electrospray ionization (ESI) is generally favored for TDP analysis, as it results in multiply charged (protonated) higher charge state protein ions, bringing the mass-to-charge (m/z) of the protein ion within the m/z range of most mass analyzers as well as increasing coulomb repulsion during gas phase dissociation and facilitating the electron/proton recombination reactions integral to ECD, ETD, and ISD [11]. The other soft ionization technique, matrix-assisted laser desorption/ionization or MALDI [12], has found use for TDP analysis in taxonomic identification of bacterial microorganisms and mass spectrometry imaging (IMS) [13]. MALDI is frequently (although not exclusively) coupled to time-of-flight (TOF) mass analyzers for analyzing the low charge protein ions generated by MALDI [14]. When MALDI is coupled with TOF and tandem TOF or TOF-TOF platforms, there are some limitations that restrict its use for TDP analysis. First, there are a relatively small number of dissociation techniques: ISD, high energy CID, and PSD. Second, these platforms have limited resolution and mass accuracy compared to other mass analyzers, e.g., Orbitrap and FT-ICR. Third, ion isolation for MS/MS has limited resolution, as it relies on spatially separating Gaussian-shaped ion packets based on their arrival time at a mass gate. Fourth, switching rapidly from MS to MS/MS mode is currently not possible. In spite of these limitations, MALDI-TOF-TOF has some attractive features for TDP analysis: generation of low charge state fragment ions (often +1) that are often easy to assign, analysis without prior sample fractionation such as liquid chromatography (protein ions can be resolved and isolated by the first TOF stage of TOF-TOF platforms for MS/MS), ease of MALDI sample preparation, and speed of data acquisition and analysis.
Our laboratory and others [15][16][17][18][19][20] have demonstrated the utility of MALDI-TOF-TOF and MS/MS-PSD in identifying non-digested protein biomarkers from complex unfractionated bacterial samples. Complex mixtures of proteins can be analyzed directly, allowing for rapid analysis. However, the fragmentation efficiency can vary widely amongst these low charge state protein ions. PSD is an ergodic dissociation technique that results in polypeptide backbone cleavage on the C-terminal side of aspartic acid (D), glutamic acid (E), and asparagine (N) residues as well as on the N-terminal side of proline (P) residues, resulting in b-type and y-type fragment ions (as well as dissociative losses of water and ammonia) [18]. The mechanism of backbone cleavage is commonly referred to as the aspartic acid effect [21][22][23][24]. Some early studies have explored the gas phase dissociation of peptides [25] and intact proteins [21,26] by PSD. It is generally understood that many factors, such as amino acid composition, sequence, and size, contribute to a protein's fragmentation pattern and efficiency. Previous statistical analysis of factors affecting fragmentation (via MALDI-TOF MS/MS and ESI ion trap MS/MS) has generally focused on the cleavage residue, for instance the N-terminal adjacent residue and C-terminal adjacent residue [27][28][29], and the types of ions observed [27,28]. However, these studies were done within the context of bottom-up proteomics (on peptides) and focused on CID.
Studies on the effects of intact protein properties on fragmentation efficiency by PSD are lacking compared to studies on peptides, presumably due to proteins' more complex structure. In this work, we use a statistical approach to explore the effects of various properties of intact proteins on fragmentation efficiency by PSD. We identify fragment signals from MS/MS-PSD spectra of proteins analyzed via MALDI-TOF-TOF, compare the data to predicted MS/MS-PSD fragments, and assign them a score based on their abundance. We then predict their corresponding protein structures and extract various structural and biochemical properties. In our analysis, we examine fourteen of these properties (ten numerical and four categorical) in relation to the signal score for D-, E-, and N-residue fragments resulting from PSD.
Sample preparation
Bacterial sample preparation and mass spectrometry data acquisition have been described in detail previously [15]. Handling of bacterial samples was performed in a Class II biohazard cabinet (Baker Company). Briefly, a bacterial strain was cultured on Luria-Bertani agar (ThermoFisher) overnight at 37 °C in a static incubator. One to two μL of cells were harvested with a sterile 1 μL loop and transferred to 300 μL of extraction solution in a 2 mL, O-ring-lined, screw-cap polypropylene microvial (Biospec Products, Bartlesville, OK). The extraction solution was either HPLC grade water (Fisher Chemical) or 33% acetonitrile (Fisher Chemical), 67% water, and 0.2% trifluoroacetic acid (Sigma-Aldrich, St. Louis, MO). Approximately 30 mg of 0.1 mm diameter zirconia/silica beads (Biospec Products) were added to the tube. The tube was tightly capped and agitated with a mini-bead-beater for 2 minutes (Biospec Products). The tube was then centrifuged for 3 minutes at 13,000 rpm (Eppendorf, Germany).
Mass spectrometry
1.5 μL of sample supernatant was spotted onto a 384-spot stainless steel MALDI target (Sciex, Redwood City, CA) and allowed to dry. The dried sample spot was then overlaid with 1.5 μL of a saturated solution of sinapinic acid (Life Technologies, ThermoFisher) dissolved in a solution of 33% acetonitrile, 67% water, and 0.2% trifluoroacetic acid. The redissolved sample with matrix was then allowed to dry.
MS and MS/MS data were collected on a 4800 MALDI-TOF-TOF mass spectrometer (Sciex, Redwood City, CA) equipped with a pulsed solid-state YAG laser (λ = 355 nm, τ = 5 ns) with a 200 Hz repetition rate. MS data were collected in linear mode. After a brief delay (~1 μs) following the laser pulse, ions were accelerated from the source at 20.0 kV, after which they struck the linear detector. The m/z range was 2000 to 20,000. MS data were collected, summed, and signal averaged from 1000 laser shots. MS linear mode was externally calibrated with the +1 and +2 charge states of cytochrome-C, myoglobin, and lysozyme (Sigma-Aldrich, St. Louis, MO).
MS/MS-PSD data were collected in reflectron mode, wherein after a brief delay (~300 ns) following the laser pulse, ions were accelerated from the source at 8.0 kV. Upon reaching the timed-ion selector or TIS (a mass gate that selects the precursor ion based on its m/z and thus its arrival time), the selected precursor ion transits the TIS gate unimpeded, whereas ions arriving outside the TIS window, too soon or too late, are blocked. A typical TIS window is manually set to the precursor mass ± 100 Da. The TIS window was narrowed further, when necessary, to exclude fragment ions from neighboring protein ions if present. After the TIS, the mass-selected precursor ion was decelerated to 1.0 kV, after which it entered the collision cell. As no collision gas was introduced into the collision cell, any fragmentation is due to post-source decay (PSD), i.e., delayed fragmentation resulting from internal energy acquired by the ion during the ionization/desorption process in the source. After the collision cell, fragment ions and unfragmented precursor ion were re-accelerated to 15.0 kV. A metastable suppressor (another mass gate) was used to block any unfragmented precursor ion from advancing to the reflectron mirror, to increase the detection sensitivity of fragment ions. Fragment ions were reflected nearly 180° by a two-stage reflectron mirror (mirror #1: 10.515 kV; mirror #2: 18.330 kV), after which they struck the reflectron detector. The MS/MS m/z range spans from 9.0 to above (+500 to 1000) the m/z of the precursor ion. MS/MS data were collected, summed, and signal averaged from 10,000 laser shots. MS/MS reflectron mode was externally calibrated with the PSD fragment ions of singly charged alkylated thioredoxin.
Data were viewed using Data Explorer software (Version 4.9, Sciex, Redwood City, CA). Raw MS/MS data were processed in the following sequence: advanced baseline correction (baseline correction parameters: peak width: 32; flexibility: 0.5; degree: 0.0), noise removal (std dev to remove: 2.00), and Gaussian smoothing (filter width: 31 points). The processed MS/MS data were then centroided and exported as an ASCII spectrum consisting of two columns of data: m/z and absolute intensity. Processed and centroided MS/MS data are provided at https://github.com/jpark837/PSD.
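As a rough, open-source analogue of this processing chain, the sketch below applies a crude baseline correction, noise thresholding, and Gaussian smoothing with NumPy/SciPy; the window sizes and thresholds are assumptions, not the Data Explorer parameters listed above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def preprocess_msms(intensity, window=101, noise_sd_cut=2.0, smooth_sigma=5.0):
    """Rough analogue of the vendor chain: baseline correction, noise
    removal, Gaussian smoothing. All parameter values are illustrative
    stand-ins for the Data Explorer settings quoted above."""
    intensity = np.asarray(intensity, dtype=float)
    # Crude baseline: rolling minimum over a fixed window, then subtract.
    pad = window // 2
    padded = np.pad(intensity, pad, mode="edge")
    baseline = np.array([padded[i:i + window].min()
                         for i in range(intensity.size)])
    corrected = intensity - baseline
    # Noise removal: zero points below noise_sd_cut standard deviations.
    corrected[corrected < noise_sd_cut * corrected.std()] = 0.0
    # Gaussian smoothing (the vendor used a 31-point filter; sigma assumed).
    return gaussian_filter1d(corrected, smooth_sigma)
```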
Extraction of protein properties
The protein properties analyzed in this work are sequence- and structurally based. We used Alphafold2 (version 2.2.0) to predict the structure of each of the bacterial proteins using the default databases [30]. We then selected bacterial proteins that were pre-identified and for which MS/MS-PSD data was available. We wrote a pipeline in Python to extract 14 properties for each instance of a D-, E-, or N-residue in the proteins. We used PyMOL (Schrödinger) to count the number of intramolecular backbone and sidechain hydrogen bonds, as well as to check for the presence of a salt bridge for each residue instance. For salt bridges, we considered electrostatic pairings of the protonated lysine (K) and arginine (R) residues with deprotonated aspartic acid (D) and glutamic acid (E) residues, and we chose a bond length cutoff of 4.0 Å [31].
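A minimal sketch of such a distance-based salt-bridge check with Biopython is shown below, assuming the K/R side-chain nitrogen and D/E side-chain oxygen pairing under the 4.0 Å criterion described above; the atom selections are simplified relative to a full treatment.

```python
import itertools
from Bio.PDB import PDBParser

def find_salt_bridges(pdb_path, cutoff=4.0):
    """Distance-based salt-bridge detection: K/R side-chain N atoms paired
    with D/E side-chain O atoms within `cutoff` angstroms. Returns sorted
    (basic_resnum, acidic_resnum) pairs."""
    basic = {"LYS": ["NZ"], "ARG": ["NE", "NH1", "NH2"]}
    acidic = {"ASP": ["OD1", "OD2"], "GLU": ["OE1", "OE2"]}
    structure = PDBParser(QUIET=True).get_structure("prot", pdb_path)
    residues = list(structure.get_residues())
    bridges = set()
    for r1, r2 in itertools.combinations(residues, 2):
        if r1.get_resname() in basic and r2.get_resname() in acidic:
            pos, neg = r1, r2
        elif r2.get_resname() in basic and r1.get_resname() in acidic:
            pos, neg = r2, r1
        else:
            continue
        for n_name in basic[pos.get_resname()]:
            for o_name in acidic[neg.get_resname()]:
                # Atom subtraction in Bio.PDB returns the distance in angstroms.
                if n_name in pos and o_name in neg and pos[n_name] - neg[o_name] < cutoff:
                    bridges.add((pos.get_id()[1], neg.get_id()[1]))
    return sorted(bridges)
```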
Secondary structure assignment and relative solvent accessible surface area calculations were done using the DSSP program [32]. The remaining numerical properties (degree, clustering coefficient, closeness, betweenness, eigenvector centrality, eccentricity, average nearest neighbor degree, and strength) are centrality measurements from residue interaction network (RIN) analysis [33]. We used the Network Analysis of Protein Structures (NAPS) webserver for prediction and centrality analysis of the RIN for each protein [34]. For the NAPS webserver, we used the following options: C-alpha network type, weighted, threshold of 0-7 Å, and residue separation of 1. For comparison between networks, we normalized eccentricity to the protein diameter [34]; the protein diameter is the maximum eccentricity value of the network.
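For readers who want to reproduce these centrality measures locally, the following sketch builds a simplified, unweighted Cα contact network with NetworkX. It approximates the NAPS construction (the weighted-network option is not reproduced), with the 7 Å threshold following the settings above.

```python
import numpy as np
import networkx as nx

def rin_centralities(ca_coords, threshold=7.0):
    """Approximate RIN centrality measures from C-alpha coordinates (an
    n x 3 array). Builds an unweighted contact graph with a 7 A cutoff;
    assumes the structure yields a connected network, which
    nx.eccentricity requires."""
    ca_coords = np.asarray(ca_coords)
    n = len(ca_coords)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):  # residue separation of 1: all distinct pairs
            if np.linalg.norm(ca_coords[i] - ca_coords[j]) <= threshold:
                G.add_edge(i, j)
    ecc = nx.eccentricity(G)
    diameter = max(ecc.values())  # protein diameter = maximum eccentricity
    return {
        "degree": dict(G.degree()),
        "clustering": nx.clustering(G),
        "closeness": nx.closeness_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
        "avg_neighbor_degree": nx.average_neighbor_degree(G),
        "eccentricity_norm": {u: e / diameter for u, e in ecc.items()},
    }
```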
Alphafold2 predicted protein structures and the code used to extract the structural properties and accompanying data are available at https://github.com/jpark837/PSD.
Computational and statistical analyses
All Alphafold2 predictions were run on a GPU node through the USDA-ARS Scientific Computing Initiative (SCINet) Ceres high-performance computing (HPC) cluster.
All statistical analyses and plot generation were done using Python and R.
For multivariate regression analysis, we assumed the response variable Y (signal score) to follow a negative binomial distribution with mean E[Y] = μ, and let x_p be a set of explanatory variables (extracted properties). μ is then related to the explanatory variables through a log link (Eq 1): ln(μ) = β_0 + Σ_p β_p x_p. We scaled the explanatory variables to the range 0-1 for comparative interpretation before fitting the model to our data containing the signal score and property values for each fragment.
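A minimal sketch of this fit with statsmodels is shown below; the input file and column names are placeholders, not the paper's actual data layout.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per D/E/N fragment with its signal score and
# the extracted numerical properties.
df = pd.read_csv("fragment_properties.csv")
predictors = ["rel_solvent_acc", "closeness", "eccentricity",
              "eigenvector", "avg_neighbor_degree", "degree"]

# Scale explanatory variables to 0-1 so coefficient estimates are comparable.
df[predictors] = (df[predictors] - df[predictors].min()) / (
    df[predictors].max() - df[predictors].min())

X = sm.add_constant(df[predictors])
nb_model = sm.GLM(df["signal_score"], X,
                  family=sm.families.NegativeBinomial()).fit()
print(nb_model.summary())  # coefficients correspond to Eq 1 on the 0-1 scale
```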
To analyze the significance of the categorical properties (secondary structure, N-terminal adjacent residue, C-terminal adjacent residue, and salt bridge presence), we performed the Kruskal-Wallis test to check whether any group within each property deviates significantly. We then performed pairwise Mann-Whitney U tests to identify the groups within each categorical property that were significantly different.
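A compact sketch of this two-step procedure with SciPy follows; the group construction and any multiple-testing correction are left to the reader, and the function and variable names are illustrative.

```python
import itertools
from scipy.stats import kruskal, mannwhitneyu

def categorical_tests(scores_by_group, alpha=0.05):
    """scores_by_group: dict mapping a category level (e.g., a C-terminal
    adjacent residue) to a list of fragment signal scores. Runs the omnibus
    Kruskal-Wallis test, then pairwise Mann-Whitney U tests if it rejects."""
    groups = {k: v for k, v in scores_by_group.items() if len(v) >= 2}
    _, p_kw = kruskal(*groups.values())
    pairwise = {}
    if p_kw < alpha:
        for a, b in itertools.combinations(groups, 2):
            _, p = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
            pairwise[(a, b)] = p
    return p_kw, pairwise
```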
For analysis of the categorical properties (N-terminal adjacent residue and C-terminal adjacent residue), we used all 36 bacterial proteins, as these properties depend only on the protein sequence. For the remaining categories, we removed 3 bacterial proteins that had a poor average predicted local distance difference test (pLDDT) score below 70 (S1 Table), as these properties depend on the predicted protein structure from Alphafold2.
Calculation of signal scores
We selected 36 bacterial proteins for which MS/MS data was available for analysis (S1 Table and https://github.com/jpark837/PSD). A typical example of MS and MS/MS data is shown in Fig 1, wherein a protein biomarker is identified from its intact mass by MS and its characteristic fragment ions obtained by MS/MS. Each protein in our study was previously identified by top-down proteomic analysis and confirmed by manual inspection comparing observed fragment ions to in silico fragment ions of the identified protein sequence. The aspartic acid effect is the dominant fragmentation mechanism of low charge state protein ions that fragment by PSD. Subsequently, the most prominent fragment ions are the result of backbone cleavage on the C-terminal side of D-, E-, and N-residues and on the N-terminal side of P-residues, resulting in characteristic backbone b-type and y-type fragment ions. Isobaric protein ions, i.e., protein ions that have the same nominal m/z and are thus not isolatable from each other by our TIS mass gate, would result in a mixture of fragment ions from both protein ions. Such a circumstance was not observed in the 36 proteins analyzed in this study: all the fragment ions of each MS/MS experiment corresponded to a single protein sequence.
The raw MS/MS data for each protein was processed, centroided, and exported as an ASCII spectrum and analyzed (Fig 2). GPMAW (version 13.03) was used to predict the average m/z of b- and y-type fragment ions resulting from in silico backbone cleavage on the C-terminal side of D-, E-, and N-residues for each protein sequence [35]. In silico fragment ions generated by GPMAW are provided at https://github.com/jpark837/PSD. Our script then matched each predicted fragment ion to the highest signal intensity of the MS/MS data within ± 5 m/z. The script also accounted for loss of ammonia (−17 m/z) and water (−18 m/z) for each fragment ion, to separate noise from background as much as possible. Once fragment signals were assigned and separated, our script compared the b- and y-type fragment ion intensity for each backbone cleavage position, then considered the larger of the two as the fragment signal (u). For each fragment signal, we used Eq 2 to calculate a signal score. The signal score, defined as the ratio of the intensity of the fragment signal (u) to the standard deviation (σ) of the background, i.e., score = u/σ (Eq 2), was our metric for fragmentation efficiency. A higher signal score indicates a higher likelihood of polypeptide backbone cleavage at that residue position, as the resulting fragment ion is more abundant. Dividing by the standard deviation of the background normalizes for varying noise between MS/MS data.
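The matching-and-scoring logic can be sketched as follows, assuming centroided m/z and intensity arrays plus a dict of predicted fragment m/z values; the background estimate here is a crude whole-spectrum standard deviation, which may differ from the paper's background definition.

```python
import numpy as np

def signal_scores(msms_mz, msms_intensity, predicted_mz, tol=5.0):
    """Match each in silico b-/y-type fragment m/z (and its -17/-18 neutral
    losses) to the most intense centroided signal within +/- tol m/z, then
    score it as u / sigma (Eq 2)."""
    msms_mz = np.asarray(msms_mz, dtype=float)
    msms_intensity = np.asarray(msms_intensity, dtype=float)
    sigma = msms_intensity.std()  # crude stand-in for the background sigma
    scores = {}
    for name, mz in predicted_mz.items():
        u = 0.0
        for target in (mz, mz - 17.0, mz - 18.0):  # NH3 and H2O losses
            window = np.abs(msms_mz - target) <= tol
            if window.any():
                u = max(u, msms_intensity[window].max())
        scores[name] = u / sigma
    return scores
```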
Backbone cleavage at E and N-residues have similar efficiencies
Initially, we noticed that the distributions of our response variable, the signal score of each fragment, overlapped for E- and N-residues (Fig 3A). Plots of the empirical cumulative distribution function (eCDF) of signal scores for D-, E-, and N-residues confirmed this observation, as the eCDFs of E- and N-residues overlapped (Pearson's correlation coefficient = 0.99) (Fig 3B). This overlap indicates that E- and N-fragments have a similar spread of signal scores. In contrast, the eCDF of D-residues was distinct from that of E- and N-residues in that it was shifted towards the right, as a larger proportion of D-fragments have higher signal scores.
Regression analyses reveal several centrality measures to be significant factors
We also noticed that the signal scores for all residues were non-normal and heavily positively skewed (Fig 3A). This shape is characteristic of count-based data, for which there exist discrete probability distributions that provide convenient models for analysis [36,37]. We rationalized that by viewing the polypeptide backbone cleavage as an event with a probability of success, we could apply these types of models to our case [36]. The clustering of signal scores of D-, E-, and N-fragments near 0, alongside extreme outliers at high signal scores, indicates overdispersion (Fig 3A). For protein properties which were numerical (Table 1), we consequently used negative binomial regression to assess the effect of each property on the signal score. The negative binomial distribution allows its variance to differ from its mean, allowing greater flexibility in handling dispersion [38].
A cross-correlation matrix of the explanatory variables showed degree and strength to be strongly correlated with each other, as the pairwise Pearson's r correlation coefficient between them was 1 (S1 Fig). We subsequently removed strength as an explanatory variable from our regression analysis to reduce redundancy. Our regression results for D-, E-, and N-residues are summarized in Table 1. We found various centrality measurements from residue interaction network (RIN) analysis to be significant. In RIN analysis, proteins are drawn as a network, where residues are considered as nodes while contacts between them are considered as edges [34].
For D-residues, relative solvent accessibility, closeness, and eccentricity were significant (p < 0.01) explanatory variables. Relative solvent accessibility describes how exposed or buried a residue is in a protein and is an important factor for determining its stability [39,40]. The positive value suggests that for D-residues, the less buried the residue is, the higher the signal score probability, up to a certain extent. D-fragments with relative solvent accessibility values in the 50-75% quartile had the highest distribution of signal scores (Fig 4A).
Closeness is defined as the inverse of the summed shortest path distances (dist(u,v)) of a node (u) to all other nodes (v): C_c(u) = 1 / Σ_v dist(u,v) (Eq 3). Closeness is an indicator of how close a node (residue) is to all other nodes in the network [34]. A positive coefficient estimate for closeness indicates that residues near other residues path-wise are associated with a higher signal score probability, which we also clearly observed in its distribution (Fig 4B).
Eccentricity is defined as the shortest path distance of the residue to the farthest residue, divided by the diameter of the protein: C_e(u) = max_v(dist(u,v)) / diameter_protein (Eq 4). A higher value indicates the residue is closer to the periphery, while a lower value indicates the residue is closer to the center [41]. The significant, positive coefficient estimate (p < 0.01) for eccentricity indicates that D-residues closer to the periphery of the protein, but not at its absolute extremity, have a higher signal score probability. For eccentricity, D-fragments with values in the lowest 0-25% and highest 75-100% quartiles had lower distributions of signal scores compared to those within the 25-50% and 50-75% quartiles (Fig 4C).
For E-residues, sidechain hydrogen bond count, closeness, eccentricity, eigenvector centrality, and average nearest neighbor degree were significant (p < 0.05) explanatory variables. Sidechain hydrogen bond count is the number of potential hydrogen bonds the sidechain of a residue is involved in within a bond length range between 2.5 and 3.2 Å. E-residues with the highest number of sidechain hydrogen bonds (75-100% quartile) had the highest distribution of fragment signal scores (Fig 4E). Like D-residues, E-residues also had a positive coefficient estimate and distribution pattern for closeness (Fig 4F). Similarly for eccentricity, E-fragments in the highest 75-100% quartile had the highest distribution of signal scores (Fig 4G).
Eigenvector centrality is the eigenvector (x_i) that corresponds to the largest eigenvalue (λ) of the adjacency matrix (A_ij): λ x_i = Σ_j A_ij x_j (Eq 5) [34,42]. This centrality metric indicates how connected a node is to other well-connected nodes in the network [34]. The negative coefficient estimate is reflected in its distribution, where E-fragments with eigenvector centrality values in the 25-50% quartile had the highest distribution of signal scores (Fig 4H).
Average nearest neighbor degree is the average of the degree (C_d(u)) of a node's direct neighbors (N(u)): C_avg(u) = (1/|N(u)|) Σ_{v∈N(u)} C_d(v) (Eq 6) [34]. This centrality metric quantifies the dependency between the degrees of a node and its neighbors [43]. Although the variable was significant (p < 0.05) and its coefficient estimate was positive (Table 1), we did not see a clear pattern upon visual inspection of the distribution of E-fragment signal scores with respect to average nearest neighbor degree. For N-residues, only eigenvector centrality was a significant explanatory variable (p < 0.05). The coefficient estimate for this variable was positive (Table 1). However, we saw that N-fragments with degree values in the lower 0-25% and 25-50% quartiles had higher distributions of signal scores (Fig 4D), indicating a negative relationship. The lack of significance of closeness and eccentricity for N-residues, in contrast to D- and E-residues, is also interesting. The presence of an amide rather than a carboxylic acid on the side chain may lead to different behavior with regard to the aspartic acid effect.
Presence of an adjacent C-terminal proline enhances fragmentation
We also analyzed four categorical properties, where we found the C-terminal adjacent residue to be a significant explanatory variable for all three residues (Table 2). The D-G, D-N, D-P, E-L, E-G, N-L, and N-P sequence motifs were found to be significant (p < 0.05). Except for the E-L and N-L sequence motifs, the rest led to a higher signal score (Fig 5A-5C). We noticed that when P was present on the C-terminal side of D- and N-residues, the signal scores of the fragments were dramatically higher. Indeed, for P-residue fragment ions, the presence of either a D-, E-, or N-residue on the N-terminal side led to a significantly (p < 0.00001) higher signal score (48.1 ± 20.1). In contrast, P-residue fragment ions that did not have an adjacent N-terminal D-, E-, or N-residue had a lower signal score of 3.6 ± 1.0 (Fig 5D). E-residues alone did not show the E-P sequence motif to be significant, presumably because there was only one instance of that motif in our dataset.
For glutamic acid, the secondary structure assignment of the residue was also significant (Table 2). T, which stands for turn and designates single helix hydrogen bonds in DSSP, led to a significantly higher signal score, whereas H, which designates a four-residue-turn alpha helix, was significantly lower (Fig 5E) [32,44].
Discussion
The aspartic acid effect is initiated by the transfer of a proton from a carboxylic acid or amide side-chain group to the backbone amine (S2 Fig) [24]. Comparing the gas-phase acidities (ΔG_gas) of the side-chain carboxylic or amide hydrogen of aspartic acid (325.9 kcal/mol), glutamic acid (324.3 kcal/mol), and asparagine (332.7 kcal/mol) [45], we were surprised to find that our distribution of D-, E-, and N-fragment scores did not match this order. Instead, we observed that the efficiency of C-terminal cleavage at E- and N-residues via PSD was nearly the same and lower than the cleavage efficiency at D-residues (Fig 3B). Alternatively, a combination of the side chain acidity, the basicity of the neighboring amine/imine (presence or absence of a proline), and the length of the side chain could explain the differing abundances of D-, E-, and N-fragments. For instance, although glutamic acid has a more acidic carboxylic proton than asparagine (which has an amide), it has nearly the same signal score distribution (Fig 3B). Glutamic acid's side chain is one carbon longer, which could deter the rearrangement required for the carboxylic proton to come into closer proximity to the neighboring backbone amine/imine. Aspartic acid has the highest signal score distribution, as it benefits from both a more acidic side chain (carboxylic proton) and a shorter side chain. Now consider glutamine, which suffers from both a less acidic side chain (amide) and a longer side chain: although fragmentation at glutamine can occur [16], it is rare and seldom seen [46]. From our regression analyses, our results strongly suggest that the local structural properties of proteins can affect fragmentation efficiency. For D- and E-residues, closeness was a highly significant (p < 0.01) explanatory variable with a positive coefficient, indicating that residues that are near other nodes distance-wise are associated with a higher signal score probability. This could possibly be explained by a more efficient distribution of internal energy: a residue with shorter interaction paths could allow for more energy transfer with less travel time [47]. Investigations into the energetics of metastable protein ions post-source would undoubtedly be insightful. In addition, for D- and E-residues, eccentricity was also highly significant (p < 0.01), indicating that residues closer to the periphery of the protein (although not at the extremity of the periphery) have a higher chance of fragmenting in comparison to those near the center.
We also showed that the presence of P-residues on the C-terminal side of either D- or N-residues dramatically enhances backbone cleavage. The D-P sequence motif is documented in peptides as well as proteins [21], and our results show that this motif can be extended to N-residues [48]. For now, we can only speculate on the reason for this enhancement. The P-residue is unique in that it is an imino acid: its backbone nitrogen is enclosed within its side-chain ring. A P-residue can be a proton acceptor, and an imine could have higher basicity than an amine in the gas phase, as has been shown theoretically in DMSO (S2B Fig) [46]. The cyclical nature of P-residues also renders them structurally very rigid, and proline has been proposed as a disruptor of secondary structures [49,50]. The presence of proline may provide a local environment beneficial for cleavage. It is also possible that the cyclic structure of proline may obstruct efficient transfer of internal energy along the backbone; for instance, an internal energy bottleneck may result in an enhancement of the side-chain rearrangement of D- and N-residues when they are located on the N-terminal side of a P-residue.
Conclusions
Three decades have passed since Yu et al.'s first description of the aspartic acid effect mechanism in protein ions generated by MALDI [21]. MALDI, coupled with TOF and TOF-TOF platforms, has adaptable applications in high-throughput proteomics, especially in rapid protein identification. Despite the demonstrated use of MALDI-TOF-TOF in proteomics, the structural and biochemical properties of proteins that affect their dissociation are relatively under-examined and poorly understood. We explore this topic in the context of bacterial proteins using new technologies. Our work highlights the local structural and sequence-based properties that affect their fragmentation via PSD, the main dissociation technique for MS/MS of intact protein ions from unfractionated protein mixtures on MALDI-TOF-TOF instruments when no collision gas is used. The fragmentation bias we observe in this work potentially adds another dimension to the structural and sequence-based information from the proteins researchers identify and analyze. Moreover, our results may be applicable to other MS platforms that can generate low charge state protein ions fragmented by an ergodic dissociation technique, as these ionization/dissociation conditions favor the aspartic acid effect fragmentation mechanism. Although our results were obtained within the context of an ergodic dissociation technique, such an analysis may also be useful in the study of gas phase protein ion structures and their fragmentation using non-ergodic dissociation techniques [9,10].
With recent advances in algorithms that reliably predict protein structures, it is important to utilize and further develop rapid mass spectrometry techniques that can confirm theoretical structures. Top-down proteomic analysis, native state mass spectrometry, H/D exchange mass spectrometry, and ion mobility mass spectrometry are likely to be the most relevant gas phase techniques for comparison to in silico predicted structures, as mature intact proteins have been shown to be retained into the gas phase under certain conditions. Our current work seeks to extract various protein properties from Alphafold2 predictions and compare them to patterns of fragmentation observed for low charge state protein ions. This approach may be of value to other researchers pursuing mass spectrometry-based intact protein analysis whose goal, beyond identification, is structural elucidation.
Fig 1. Example MS data of a strain of Salmonella enterica subsp. enterica serovar Infantis. (A) Linear MS data of bacterial cell lysate. (B) The identified protein sequence (hypothetical/YahO) after removal of its 21-residue signal peptide. An asterisk denotes a site of backbone cleavage with its corresponding b-type and/or y-type fragment ions. (C) MS/MS data of the protein ion at m/z 7666. Fragment ions are identified by m/z (theoretical value in parentheses) and their b- or y-type fragment ion designation. (D) The pre-processed and centroided MS/MS data of the protein ion at m/z 7666.
https://doi.org/10.1371/journal.pone.0299287.g001
Fig 5. Analysis of categorical explanatory variables. (A) Box plot of fragment signal scores grouped by C-terminal residues adjacent to D. (B) Box plot of fragment signal scores grouped by C-terminal residues adjacent to E. (C) Box plot of fragment signal scores grouped by C-terminal residues adjacent to N. (D) Bar graph comparing the proline fragment signal scores whose adjacent N-terminal residue was D, E, or N versus non-D/E/N. (E) Box plot of fragment signal scores grouped by secondary structure of E-residues. Significant explanatory variables with p < 0.05, p < 0.01, and p < 0.00001 are marked by *, **, and *****, respectively, based on the Mann-Whitney U test. The bar graph is displayed as mean ± standard error.
Table 2. Kruskal-Wallis test of categorical explanatory variables.
Significant explanatory variables with p < 0.05 and p < 0.01 based on the Kruskal-Wallis test are highlighted in yellow and lavender, respectively. | 6,970.6 | 2024-05-03T00:00:00.000 | [
"Chemistry",
"Computer Science"
] |
Computational Design and Biological Evaluation of Analogs of Lupin Peptide P5 Endowed with Dual PCSK9/HMG-CoAR Inhibiting Activity
(1) Background: Proprotein convertase subtilisin/kexin 9 (PCSK9) is responsible for the degradation of the hepatic low-density lipoprotein receptor (LDLR), which regulates the circulating cholesterol level. In this field, we discovered natural peptides derived from lupin that showed PCSK9 inhibitory activity. Among these, the most active peptide, known as P5 (LILPKHSDAD), reduced the protein-protein interaction between PCSK9 and LDLR with an IC50 equal to 1.6 µM and showed a dual hypocholesterolemic activity, since it also exerts complementary inhibition of 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMG-CoAR). (2) Methods: In this study, by a computational approach, the P5 primary structure was optimized to obtain new analogs with improved affinity to PCSK9. Then, biological assays were carried out to fully characterize the dual cholesterol-lowering activity of the P5 analogs using both biochemical and cellular techniques. (3) Results: A new peptide, P5-Best (LYLPKHSDRD), displayed improved PCSK9 (IC50 0.7 µM) and HMG-CoAR (IC50 88.9 µM) inhibitory activities. Moreover, in vitro biological assays on cells demonstrated that not only P5-Best but all tested peptides maintained the dual PCSK9/HMG-CoAR inhibitory activity, and remarkably P5-Best exerted the strongest hypocholesterolemic effect. In fact, in the presence of this peptide, the ability of HepG2 cells to absorb extracellular LDL was improved by up to 254%. (4) Conclusions: the atomistic details of the P5-Best/PCSK9 and P5-Best/HMG-CoAR interactions represent a reliable starting point for the design of new promising molecular entities endowed with hypocholesterolemic activity.
Introduction
Hyperlipidemia is a well-known risk factor for developing cardiovascular disease [1]. The most common drugs for hypercholesterolemia treatment are statins, which inhibit 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMG-CoAR), the rate-limiting enzyme in cholesterol biosynthesis. This inhibition lowers intracellular cholesterol levels, leading to an increased expression of LDL receptors (LDLR) on cell surfaces and a reduction of serum LDL-cholesterol (LDL-C) via the activation of the sterol-regulatory element-binding protein (SREBP)-2 transcription factor pathway [2]. Although this approach is considered an efficient way to reduce circulating LDL-C, cardiovascular events still occur in some patients. Moreover, statins induce known and serious side effects, such as headache, muscle and joint pain, and a higher risk of developing diabetes.
Another key player in cholesterol homeostasis is proprotein convertase subtilisin/kexin 9 (PCSK9), which was discovered in 2003 and has been recognized as one of the most promising targets for counteracting hypercholesterolemia and atherosclerotic cardiovascular diseases [3]. Under physiological conditions, blood-circulating PCSK9 interacts with the low-density lipoprotein receptor (LDLR) on the liver cell membrane, triggering the internalization of the LDLR/PCSK9 complex in a digestive vacuole [4]. Consequently, the main biological activity of PCSK9 is the regulation of the LDLR population on the liver cell surface, resulting in the tuning of the cellular capacity to capture circulating LDL cholesterol (LDL-C). Accordingly, the inhibition of the PCSK9/LDLR interaction leads to an increased LDLR population on the cell membrane, resulting in an enhanced capacity of liver cells to capture blood-circulating LDL-C [5].
The expression of PCSK9 is also controlled by the activity of SREBP-2 as well as by a specific transcriptional activator, hepatocyte nuclear factor-1α (HNF-1α) [6], a liver-enriched transcription factor regulating many target genes in the liver and intestine [7]. The ability of SREBP-2 to co-stimulate PCSK9 and LDLR expression limits the therapeutic efficacy of statins, which are known to produce their effects via SREBP-2 activation. Indeed, it is well documented that statins increase PCSK9 protein production through the augmentation of intracellular HNF-1α levels [8].
Hence, in the last two decades, academia and pharmaceutical companies have financed considerable research on the development of compounds capable of targeting PCSK9, pursuing different strategies including siRNA, anti-sense oligonucleotides (ASOs), peptide inhibitors, and monoclonal antibodies (mAbs) [5,[9][10][11][12][13][14][15][16][17][18]. In this field, we have recently identified the most potent natural peptide (LILPKHSDAD, P5) derived from a peptic lupin (Lupinus A.) protein hydrolysate [19] with hypocholesterolemic activity [20], which impairs the PCSK9-LDLR interaction with an IC50 value of 1.6 µM [21]. In parallel, P5 reduces the catalytic activity of HMG-CoAR with an IC50 value of 147.2 µM [22]; through the inhibition of the enzyme activity, P5 increases the LDLR protein level in HepG2 cells through the activation of SREBP-2, and, through a downregulation of HNF-1α, it reduces the PCSK9 protein levels and its secretion into the extracellular environment [22]. This unique synergistic multi-target inhibitory behavior of P5 determines the improved ability of HepG2 cells to uptake extracellular LDL, with a final hypocholesterolemic effect. P5 was successfully transported by differentiated human intestinal Caco-2 cells [23] through transcytosis [24], and, during transport, it was partially metabolized into a breakdown fragment (LPKHSDAD, P5-met), which retained the biological activity of the parent peptide [24]. In fact, we have demonstrated that P5-met reduces PCSK9-LDLR binding with a dose-response trend and an IC50 of 1.7 µM and inhibits HMG-CoAR with an IC50 of 175.3 µM [24]. At the cellular level, like P5, P5-met increases LDLR and reduces PCSK9 levels through the modulation of SREBP-2 and HNF-1α, respectively [24]. Therefore, since P5-met displayed the same activity and behavior as the parent peptide P5, our results indicated that the first two amino acid residues (LI) do not play a key role in the interaction with either the PCSK9 or the HMG-CoAR target.
This evidence clearly indicates that P5, with its dual-inhibitory activity, represents a new alternative strategy to the use of single classical PCSK9 and HMG-CoAR inhibitors. Notably, a dual-inhibitor strategy may be more effective in overcoming the deficits attributed to the classical use of statins (adverse effects, and the co-stimulation of PCSK9 and LDLR via the common transcriptional activator SREBP-2, which limits the efficacy of these classical HMG-CoAR inhibitors in statin-treated patients) or of PCSK9 inhibitors (expensiveness, low patient compliance, repeated administrations, and injection site irritations), and in meeting the desired health goals and public priorities in terms of safety and cost-related issues.
Considering these observations, the overall aim of the present study is the development of new P5 analogs able to target both PCSK9 and HMG-CoAR, thereby displaying an improved, dual hypocholesterolemic activity. To achieve this objective, new P5 analogs with improved PCSK9 and HMG-CoAR inhibitory activity were computationally designed [25]. The theoretical study was then validated and confirmed by a detailed biological investigation of the most promising P5 analogs. First, their ability to inhibit the protein-protein interaction (PPI) between PCSK9 and LDLR and the HMG-CoAR activity was evaluated using biochemical assays. Then, their effects on the modulation of the cholesterol pathway in HepG2 cells were characterized in depth, confirming their dual inhibitory cholesterol-lowering activity. In more detail, the effect of PCSK9 inhibition by P5 analogs on the LDLR protein levels on the surface of hepatocytes, and their ability to improve the functional capacity of hepatocytes to uptake LDL from the extracellular environment, were assessed by performing in-cell western (ICW) [26] and LDL-uptake assays [27], respectively, in the presence of PCSK9. In parallel, to assess the effects of P5 analogs on the cholesterol pathway upon HMG-CoAR inhibition, western blotting was performed, monitoring the LDLR, SREBP-2, HMG-CoAR, PCSK9, and HNF-1α protein levels. In addition, ELISA experiments assessed the effects of P5 analogs on the hepatic secretion of PCSK9. Finally, LDL-uptake and ICW assays were carried out to investigate the functional ability of hepatic cells to absorb extracellular LDL upon treatment with P5-derived peptides.
System Setup and MD Simulations
The computational models utilized in this study were built starting from the coordinates of the PCSK9/P5 complex model previously reported by us [21]. Here, the starting PCSK9/P5 complex model was additionally equilibrated through 1 µs-long MD simulations, utilizing the pmemd.cuda module of the AMBER20 package [28]. In particular, the ff14SB AMBER force field [29] was used for simulating the protein atoms, while the TIP3P model [30] was used to explicitly represent the water molecules (about 25,000). The sodium ions were added to neutralize the overall charge of the simulation system and the MD trajectories acquired during the production runs were examined by visual inspection by means of VMD [31], ensuring that the thermalization did not cause any structural distortion. This protocol was also utilized to perform MD simulations on the PCSK9 complexes resulting from the mutations of the P5 sequence.
Cluster Analysis
The MD trajectory frames were analyzed by clustering the conformations adopted by the peptide backbone atoms in complex with PCSK9. The cluster analysis was performed using the cpptraj module [32] of AMBER20 [28]. The MD frames were divided into clusters by the average-linkage hierarchical algorithm, and the PCSK9/peptide complex conformations with the lowest root mean square deviation (RMSD) from the cluster centers (the structures representative of the cluster, SRC) were extracted and visually inspected. Molecular mechanics-generalized Born surface area (MM-GBSA) calculations were performed on 100 frames belonging to the most populated cluster of PCSK9/peptide conformations, using the MMPBSA.py module [33] of AMBER20 with the parameters kept at their default values. In these calculations, the single-trajectory approach was applied, and the entropy contributions to the binding free energy were neglected [25,34]. For this reason, the estimated binding free energy values are denoted ∆G*.
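For illustration, the clustering step can be driven from Python with a short cpptraj script; this is a hedged sketch, assuming cpptraj is on PATH and that the peptide occupies residues 301-310 of the complex (hypothetical numbering), not the authors' exact input.

```python
import subprocess

# Average-linkage hierarchical clustering on the peptide backbone; one
# representative structure (SRC) is written per cluster as a PDB file.
cpptraj_input = """
parm complex.prmtop
trajin prod.nc
cluster hieragglo clusters 5 averagelinkage rms :301-310@N,CA,C,O out cnumvtime.dat summary cluster_summary.dat repout cluster_rep repfmt pdb
run
"""
subprocess.run(["cpptraj"], input=cpptraj_input, text=True, check=True)
# MM-GBSA on frames from the most populated cluster would then follow
# via the MMPBSA.py module, as described above.
```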
Alanine Scanning
The nine P5 alanine mutants were built by systematically altering the peptide sequence in the PCSK9/P5 complex with the tleap module of AMBER20 [28]. The resulting complexes were energy-minimized and equilibrated through 250 ns of MD simulations, adopting the procedure and parameters previously described for the PCSK9/P5 complex. One hundred snapshots were regularly extracted from the trajectory interval in which the peptide under investigation displayed the smallest root mean square deviation (RMSD) fluctuation, to ensure the lowest standard error in the binding free energy calculation. MM-GBSA calculations with the MMPBSA.py module [33] were finally performed to estimate the binding free energy values of the mutant peptides.
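One common way to prepare such alanine mutants for tleap is to truncate the target side chain at CB and rename the residue, letting tleap rebuild the missing atoms; the sketch below illustrates this idea with hypothetical file names and chain/residue identifiers, and is not necessarily the authors' exact workflow.

```python
# Truncate a residue to alanine in a PDB file so tleap (ff14SB) can rebuild it.
KEEP = {"N", "CA", "C", "O", "CB", "H", "HA"}  # ALA heavy atoms + backbone H

def mutate_to_ala(pdb_in, pdb_out, chain, resseq):
    with open(pdb_in) as fin, open(pdb_out, "w") as fout:
        for line in fin:
            if line.startswith(("ATOM", "HETATM")) and \
               line[21] == chain and int(line[22:26]) == resseq:
                if line[12:16].strip() not in KEEP:
                    continue                          # drop side-chain atoms
                line = line[:17] + "ALA" + line[20:]  # rename the residue
            fout.write(line)

# e.g., build the H6A mutant, assuming the peptide is chain "P" (hypothetical)
mutate_to_ala("pcsk9_p5.pdb", "pcsk9_p5_H6A.pdb", "P", 6)
```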
Computational Design of New P5 Analogs
The "protein preparation wizard" module implemented in the Maestro release 2019-4 (Schrödinger, LLC, New York, NY, USA, 2017) for molecular modeling ensured the accuracy of the PCSK9/P5 complex conformation previously equilibrated by MD simulations. In particular, this module permitted: (1) to check the residue protonation state at pH 7.4, (2) to check the residue completeness, (3) to eliminate atomic clashes, and (4) to assign the OPLS3e force field [35]. Then, the 400 possible peptides were generated by the replacement of the P5 positions 2 and 9 with the twenty natural amino acids. The resulting PCSK9/peptide complexes were minimized by Prime MM-GBSA module of Maestro, which uses OPLS3e [35] as force field and a continuum solvent models to include the solvent effect into the calculations. Then, affinity maturation functionality implemented in Bioluminate module (Schrödinger, LLC, New York, NY, USA, 2017) estimated the change in affinity (∆Affinity) between PCSK9 and the mutant peptides, with respect to P5. Finally, the mutant peptides acquiring the highest gain in ∆Affinity were additionally refined by MD simulations and MM-GBSA calculations by AMBER20 package [28], as it was conducted previously for the alanine-mutant peptides (see the previous section for details).
HMG-CoA Reductase Model Setup and Simulations Protocol
The HMG-CoAR structure solved by X-ray crystallography (PDB accession code 3CCZ) [36] and used in this study was deposited as a homotetramer in which (3R,5R)-7-[2-(4-fluorophenyl)-4-{[(1S)-2-hydroxy-1-phenylethyl]carbamoyl}-5-(1-methylethyl)-1H-imidazol-1-yl]-3,5-dihydroxyheptanoic acid molecules are bound in the catalytic sites. For simplicity, we performed simulations on the functionally active homodimer, choosing as the peptide binding site the one identified by the presence of a statin in the X-ray complex. Since the homodimer contained two statin molecules, one of them was removed to allow the docking calculations, while the statin present in the second site was kept in its original position to avoid any protein conformational distortion induced by the absence of the ligand. The system was prepared and minimized through the "protein preparation wizard" tool available in the Maestro software (release 2019-4). Peptide docking calculations of [S7A]P5 were performed using the Glide docking tool [37] of the Maestro software, setting as the center of the grid the centroid of the statin (residue code 5HI) co-crystallized in the catalytic site of HMG-CoAR. The "standard precision" mode was applied in these calculations, and the Glide gscore was used as the scoring function. Ten peptide docking poses were generated, while the number of post-docking minimization poses was set to 50. The formation of cis amide bonds was not allowed. The [S7A]P5 pose with the best predicted gscore (−9.881 kcal/mol) was selected for the following MD simulations and cluster analysis calculations (Table S1, Supporting Material), adopting the AMBER20 protocol [28] previously described for the studies of the peptides in complex with PCSK9.
Peptide Synthesis
GenScript (Piscataway, NJ, USA) synthesized the P5 analogs selected for the biological assays. All compounds were >95% pure by HPLC analysis (see Supporting Information for details).
HepG2 Cell Culture Conditions and Treatment
The HepG2 cell line was purchased from ATCC (HB-8065, ATCC from LGC Standards, Milan, Italy) and was cultured in DMEM high glucose with stable L-glutamine, supplemented with 10% FBS, 100 U/mL penicillin, and 100 µg/mL streptomycin (complete growth medium), with incubation at 37 °C under a 5% CO₂ atmosphere.
HMG-CoAR Activity Assay
The experiments were carried out following the manufacturer's instructions and an optimized protocol [38]. See Supplementary Materials for further details.
In Vitro PCSK9-LDLR Binding Assay
Peptides P5 and the P5 analogs (0.1-100 µM) were tested using the in vitro PCSK9-LDLR binding assay (CycLex Co., Nagano, Japan), following the manufacturer's instructions and previously optimized conditions [21]. Further details are provided in Supplementary Materials.
In-Cell Western (ICW) Assay
For the experiments, a total of 3 × 10⁴ HepG2 cells/well were seeded in 96-well plates and treated with 4.0 µg/mL PCSK9-WT, 4.0 µg/mL PCSK9 + peptides P5 and/or P5 analogs (50 µM), or vehicle (H₂O) for 2 h at 37 °C under a 5% CO₂ atmosphere. Cells then underwent the ICW assay following previously optimized conditions [26]. See Supplementary Materials for detailed information.
Fluorescent LDL Uptake
HepG2 cells (3 × 10⁴/well) were seeded in 96-well plates and kept in complete growth medium for 2 days before treatment. On the third day, cells were washed with PBS and starved overnight (O/N) in DMEM without FBS and antibiotics. After starvation, they were treated with 4.0 µg/mL PCSK9, 4.0 µg/mL PCSK9 + P5 or P5 analog peptides (50 µM), or vehicle (H₂O) for 2 h at 37 °C under a 5% CO₂ atmosphere. Fluorescent LDL uptake was finally assessed following an optimized protocol [22]. See Supplementary Materials for further details.
Western Blot Analysis
A total of 1.5 × 10⁵ HepG2 cells/well (24-well plates) were treated with 50 µM of P5 or P5 analogs for 24 h. Immunoblotting experiments were performed using an optimized protocol [39]. See Supplementary Materials for further details.
Quantification of PCSK9 Secreted by HepG2 Cells through ELISA
The supernatants collected from treated HepG2 cells (50 µM of P5 and/or P5 analogs) were centrifuged at 600× g for 10 min at 4 °C, and the ELISA assay was performed using a previously optimized protocol [40]. Detailed data are provided in Supplementary Materials.
Statistical Analysis
All data sets were checked for normal distribution by the D'Agostino-Pearson test. Since they were all normally distributed, we proceeded with statistical analyses by one-way ANOVA followed by Tukey's post-hoc test, using GraphPad Prism 9. Values are reported as means ± SD; p-values < 0.05 were considered significant.
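For reference, the same pipeline can be reproduced outside GraphPad; the sketch below uses SciPy with made-up example data (eight replicates per group, mirroring four independent experiments in duplicate), not the paper's measurements. SciPy will warn that the normality test is approximate for such small samples.

```python
import numpy as np
from scipy import stats

control = np.array([100.0, 98.5, 101.2, 99.7, 100.4, 99.1, 101.0, 98.9])
p5      = np.array([118.3, 121.0, 115.9, 119.4, 117.2, 120.1, 118.8, 119.9])
p5_best = np.array([127.8, 131.2, 125.4, 129.9, 126.7, 130.3, 128.5, 129.0])

# D'Agostino-Pearson normality check (p > 0.05: normality not rejected)
for name, grp in [("control", control), ("P5", p5), ("P5-Best", p5_best)]:
    print(name, "normality p =", stats.normaltest(grp).pvalue)

# One-way ANOVA followed by Tukey's HSD post-hoc test (recent SciPy)
print("ANOVA p =", stats.f_oneway(control, p5, p5_best).pvalue)
print(stats.tukey_hsd(control, p5, p5_best))
```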
Identification of Hotspots and Designs of New P5 Analogs
To obtain a robust hypothesis on the peptide P5 binding mode, the PCSK9/P5 complex model we had previously developed [21] was optimized once more by extending the molecular dynamics (MD) simulations to 1 µs (see Figure S1 in the Supporting Information for the RMSD plots). At the end of these simulations, the MD trajectory frames were grouped using a cluster analysis algorithm (see the Materials and Methods Section) to determine the most favored P5 conformation in complex with PCSK9. The PCSK9/P5 complex conformation representative of the most populated cluster (78%) suggested how peptide P5 could bind to PCSK9, as illustrated in Figure 1. In particular, P5 could bind to PCSK9 through (1) a salt bridge between the charged N-terminus of P5-Leu1 and the side chain of Asp238, (2) an H-bond between the imidazole ring of P5-His6 and the NH group of Ser381, (3) an H-bond network between the side chain of P5-Ser7, the NH of P5-Asp8, and the side chain of Asp367, and (4) an H-bond formed by the side chain of P5-Asp8 and the side chain of Ser383. The side chain of P5-Leu3 was deeply inserted into a hydrophobic basin shaped by the PCSK9 residues Phe379, Pro155, and Ile369, creating van der Waals interactions. Then, to identify new P5 analogs endowed with improved PCSK9 affinity, we designed new peptides by substituting the P5 residues not considerably involved in the PCSK9 contact (non-hotspots) with new amino acids showing an improved complementarity with the PCSK9 surface [25]. In the first step, the hotspots and non-hotspots of P5 were identified by performing a computational alanine-scanning mutagenesis analysis, in which all the peptide residues in the PCSK9/P5 complex were systematically mutated into alanine. Specifically, nine 3D models, in which PCSK9 was in complex with each alanine-mutated peptide P5, were simulated by 200 ns-long MD simulations, and the subsequent molecular mechanics-generalized Born surface area (MM-GBSA) calculations estimated the mutant peptides' binding free energy values (∆G*, Table 1). Then, by comparing the ∆G* values calculated for the mutant peptides with those calculated for P5, the hotspots and non-hotspots of P5 could be identified [40,41]. Table 1. Estimated binding free energy values of the peptides under investigation, as calculated using the MM-GBSA approach (∆G*, column 3).
The results obtained suggest that positions 3 and 6 can be considered hotspots, as their mutation into alanine led to P5 analogs with a considerable reduction in the predicted binding affinity (∆∆G* higher than 10 kcal/mol). Specifically, P5-His6 appeared crucial for peptide binding, because its substitution led to a dramatic drop in the peptide binding interaction energy. The MD simulations suggest that the substitution of the basic side chain of His6 with a methyl group changed the peptide binding mode owing to the loss of an H-bond between the PCSK9-Ser282 amide group and the imidazole ring of P5-His6. For this reason, the [H6A] peptide P5 detached from the PCSK9 surface after the initial steps of the MD simulations (see Figure S2 in the Supporting Information for the RMSD plots). Similarly, the removal of the side chain of P5-Leu3 led to a peptide incapable of maintaining the initial P5 binding mode, as the hydrophobic contacts engaged by the Leu3 isobutyl group with the hydrophobic crevice shaped by the PCSK9 residues Leu159, Pro156, Ala240, and Ile370 were missing.
The alanine mutation of Leu1 and Lys5 led to peptides with calculated binding affinities slightly lower than that of P5 (∆∆G* close to +5 kcal/mol). However, given the inaccuracy of the MM-GBSA calculations and the observation that the side chains of Leu1 and Lys5, fluctuating in the solvent environment, do not stably bind to PCSK9 during the MD simulations, positions 1 and 5 cannot be considered strong hotspots like positions 3 and 6. Conversely, the substitution with alanine of Ile2, Ser7, and Asp10 of P5 led to peptides with a predicted binding affinity close to that predicted for the template peptide. Therefore, they can be considered non-hotspots and can potentially be substituted with different amino acids. Moreover, the predicted data on Leu1 and Ile2 are in accordance with our recent experimental data, which show that a metabolite of peptide P5 lacking the first two residues (P5-met, LPKHSDAD) displays an IC50 value close to that of the parent peptide P5 [24].
Conversely, the P4A and D8A mutant peptides showed a higher predicted affinity for PCSK9 than P5. However, as the gain in the ∆G* value was not substantial, the synthesis and biological evaluation of these peptides was not pursued.
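The hotspot criterion used above reduces to a simple comparison of ∆G* values; the sketch below re-expresses that logic in code with placeholder numbers (the paper's Table 1 values are not reproduced here).

```python
# Classify P5 positions from alanine-scan MM-GBSA results: a mutation whose
# DDG* = DG*(mutant) - DG*(P5) exceeds +10 kcal/mol marks a hotspot.
DG_P5 = -19.0  # kcal/mol, placeholder for the template peptide

dg_mutants = {  # hypothetical illustration values, not Table 1 data
    "L1A": -14.5, "I2A": -19.0, "L3A": -6.0, "K5A": -14.0,
    "H6A": -5.5, "S7A": -18.9, "D10A": -19.5,
}

for mut, dg in sorted(dg_mutants.items()):
    ddg = dg - DG_P5          # positive: mutation weakens predicted binding
    label = "hotspot" if ddg > 10 else "non-hotspot"
    print(f"{mut}: DDG* = {ddg:+.1f} kcal/mol -> {label}")
```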
Design of P5 Analogs with Improved PCSK9 Predicted Affinity
The alanine-scanning study showed that positions 1, 2, 7, and 10 of the P5 sequence can be considered non-hotspots. Moreover, the alanine in position 9 should also be considered a non-hotspot, as alanine is already present in the natural P5 sequence. Nevertheless, the P5-Ser7 OH group could create an H-bond with PCSK9-Asp367, and the P5-Asp10 side chain could be involved in the fold of the peptide, as an internal H-bond could be formed with the side chain of P5-Ser7. Thus, we decided to mutate the residues in positions 2 and 9 to develop novel P5 analogs with improved PCSK9 binding affinity. Accordingly, 20² = 400 P5 analogs were computationally designed through the systematic substitution of positions 2 and 9 with all natural amino acids. Their theoretical affinity for PCSK9 was preliminarily evaluated by the Prime algorithm (Maestro, release 2019-4), which estimates the peptide binding free energy using the MM-GBSA approach. PCSK9 in complex with each of the 10 top-ranking P5 analogs (i.e., those with the lowest ∆Affinity values, Table 2) again underwent MD simulations by applying the previously described AMBER20 MD protocol. The ∆G* values were then estimated using the MM-GBSA protocol (Table 2), which allowed the acquisition of ∆G* values comparable with those previously obtained for P5 and the other P5 alanine mutants.
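The combinatorial step is straightforward to express in code; the sketch below enumerates the 400 double mutants (the subsequent Prime MM-GBSA scoring relies on proprietary Schrödinger tools and is therefore not reproduced).

```python
# Enumerate the 20 x 20 = 400 substitutions of P5 positions 2 and 9.
AA = "ACDEFGHIKLMNPQRSTVWY"
P5 = "LILPKHSDAD"

analogs = [P5[0] + aa2 + P5[2:8] + aa9 + P5[9] for aa2 in AA for aa9 in AA]

assert len(analogs) == 400
assert "LYLPKHSDRD" in analogs   # [I2Y-A9R]P5, i.e., P5-Best
```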
The Prime calculations (third column of Table 2) suggested that the peptides acquiring an improved predicted binding energy were the ones containing arginine in position 9. In contrast, the substitutions in position 2 did not considerably affect the affinity of the resulting peptides (differences in the ∆Affinity values fell within a range of 5 kcal/mol). Subsequently, using the AMBER20/MM-GBSA calculations, the resulting ∆G* values spanned from −19 to −42 kcal/mol. This allowed us to assess that the peptide [I2Y-A9R]P5 (i.e., P5-Best) was endowed with the highest predicted PCSK9 binding affinity (see Figure S3 in the Supporting Information for the RMSD plots). In fact, the ∆G* value of P5-Best was twice that predicted for the template peptide P5, suggesting that P5-Best could bind PCSK9 appreciably more strongly than P5. Our simulations showed that, as indicated in the conformation representative of the most populated cluster (Figure 2), P5-Best could acquire an improved PCSK9 complementarity thanks to the possibility of creating two salt bridges: the first between the new arginine in position 9 and the side chains of Glu366 and Asp367, and the second between the side chain of P5-Best-Asp10 and the side chain of Lys222. These interactions were also reinforced by the presence of an H-bond between P5-Best-Asp10 and the OH group of Ser225. Moreover, the P5-Best-Leu3 side chain was in contact with the hydrophobic pocket shaped by Phe379, while the phenol ring of the new residue P5-Best-Tyr2 was located close to Arg194. The NH groups of P5-Best-Tyr2 and -Leu3 created two H-bonds with the side chain of Asp238. These results were also compared to the computational data obtained in designing the poly-imidazole derivatives capable of inhibiting PCSK9 [17]. In fact, in our previous paper, applying a computational approach similar to the one applied here, we designed and biologically evaluated two poly-imidazole derivatives endowed with PCSK9-inhibiting activity. The biological evaluation of the most interesting poly-imidazoles, named Rim13 and Rim14, allowed us to report on their ability to modulate the LDLR expression on the human hepatic HepG2 cell surface and their capacity to increase the extracellular uptake of LDL by the same cells. Here, structurally aligning the hypothetical binding modes of P5-Best and Rim13, we noted that the backbone atoms of the peptide residues Pro4 and Lys5 were mimicked by the first two imidazole rings of Rim13 (Figure 2B). Moreover, the benzyl chain of the second imidazole ring of Rim13 was projected into the same hydrophobic cleft shaped by Phe379 and occupied by the side chain of P5-Best-Leu3, creating van der Waals interactions. Furthermore, the negatively charged area created by the PCSK9 residues Asp367 and Glu366 was in contact with the side chain of P5-Best-Arg9 and the amino-methyl chain of Rim13. Since they bind similarly, creating contacts with the same PCSK9 residues, this alignment could help in the design of new poly-imidazole derivatives.
In fact, aiming at designing more potent poly-imidazole derivatives, the benzyl moiety of Rim13 could be substituted with alkyl chains (linear or branched) to reproduce the interactions played by the P5-Best-Leu3 residue. Conversely, regarding the design of new P5 analogs, the Pro4 of P5-Best could be replaced by aromatic residues such as Phe, Tyr, or Trp, in order to reproduce the interactions played by the p-methoxyphenyl ring of Rim13. However, the oral pharmacokinetic properties of peptides remain strongly limited by the presence of degrading enzymes in the gastrointestinal tract, although research efforts are still devoted to solving this limitation. In fact, active peptides could be orally administered together with penetration enhancers, within hydrogels, or in combination with digestive-enzyme inhibitors. Alternatively, they can be suitably coated with acid-stable polymers or administered through intestinal patches [42]. By means of one of these innovative delivery strategies, even peptides active in the high micromolar range could be successfully employed for the treatment of several pathologies. Indeed, numerous peptides are in phase III clinical trials but, until now, only desmopressin has reached the market and is used in the clinic [42].
Experimental Validation of the Computational Predictions
In light of these theoretical studies, empirical assays were performed on the [H6A] peptide P5 (i.e., P5-H6A), because position 6 was recognized as a hotspot (Table 1); on P5-Best, because of its lowest predicted binding free energy value; and on [S7A]P5 (i.e., P5-S7A), because it represents one of the peptides for which the alanine mutation did not remarkably alter the predicted binding free energy value. Conversely, the mutation of P5-Asp10 into alanine affected both the peptide folding (as shown by MD simulations) and the water solubility of the peptide, as the negatively charged side chain of Asp10 would be substituted with the aliphatic methyl group of alanine. Thus, the peptides P5-H6A, P5-S7A, and P5-Best were purchased from GenScript and evaluated biochemically by in vitro experiments.
P5 Analogs Impair the PPI between PCSK9 and LDLR
To verify whether the P5 derivatives could impair the PPI between PCSK9 and LDLR, dedicated biochemical experiments were performed. The results showed that P5-Best, P5-H6A, and P5-S7A reduced the PCSK9-LDLR binding with a dose-response trend and IC50 values of 0.7, 9.0, and 1.45 µM, respectively (Figure 3A). The results confirmed that two of the new P5 derivatives were more active than P5 (1.6 µM). These data are in line with the computational predictions. In fact, the calculated peptide ∆G* values indicated that the most active peptide should be the double mutant P5-Best (∆G* = −41.7 kcal/mol); that P5-S7A should display a binding affinity in the range of P5 (∆G* values of −19.3 and −18.9 kcal/mol, respectively); and that P5-H6A should not be active, since, by our predictions, the side chain of His6 plays a crucial role in the stabilization of the peptide on the PCSK9 surface. Nevertheless, it has to be noted that a higher affinity would have been expected for P5-Best, since its calculated ∆G* value was double that of P5. In our opinion, the lack of linearity between the experimental binding affinity data (IC50 values) and the computational predictions could be due to the omitted calculation of the entropic contribution to the binding free energy. In fact, the calculation of this contribution is highly computationally demanding, and the error associated with its estimation is very often greater than the value itself. Moreover, the data obtained from the further biological investigations on these peptides cannot be compared with the computational predictions, since it is very difficult to discuss the in silico results against the biological data obtained from HepG2 cells. In fact, the molecular modeling studies were performed on a PCSK9 model immersed in a box of water molecules, and the only biological experiment capable of reproducing these conditions is the one in which the recombinant PCSK9 is in contact with the LDLR, i.e., the binding assays displayed in Figure 3. Conversely, when the biological properties of peptides are assessed in complex experimental settings, such as those involving cells, the molecular modeling results cannot be linearly compared with the experimental data, as the effects of membranes and of extracellular or intracellular enzymes are not considered in our calculations.
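As an illustration of how IC50 values such as those in Figure 3A are obtained, the sketch below fits a four-parameter logistic model to dose-response data; the data points are hypothetical, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])      # uM
binding = np.array([98.0, 92.0, 64.0, 35.0, 15.0, 7.0, 4.0])  # % PCSK9-LDLR

popt, _ = curve_fit(four_pl, conc, binding,
                    p0=[0.0, 100.0, 1.0, 1.0], maxfev=10000)
print(f"fitted IC50 = {popt[2]:.2f} uM")
```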
Furthermore, the ability of these P5 analogs to modulate the levels of LDLR localized on HepG2 surfaces was investigated in the presence of PCSK9 (4 µg/mL) using an in-cell western (ICW) assay. The results showed that the LDLR levels decreased in the presence of PCSK9 alone by 25.4 ± 1.6% (p < 0.001) compared with the untreated control cells, and that P5-Best, P5-H6A, and P5-S7A could significantly restore the LDLR levels to 96.9 ± 10.4%, 93.4 ± 5.2%, and 104.4 ± 0.4% (p < 0.001), respectively, when co-incubated with PCSK9 ( Figure 3B). Finally, functional cell-based assays were performed to investigate the ability of HepG2 cells to uptake extracellular LDL in the presence of PCSK9. HepG2 cells incubated with PCSK9 alone showed a 43.6 ± 9.6% (p < 0.05) reduction in the uptake of fluorescent LDL compared with the untreated control cells. This result is in agreement with the reduction of active LDLR population on the cell surface, which was observed by ICW. After co-incubation with PCSK9, P5-Best, P5-H6A, and P5-S7A completely restored the LDLR function, increasing the LDL uptake to 129.2 ± 21.9% (p < 0.001), 107.4 ± 23.0% (p < 0.05), and 125.4 ± 19.0% (p < 0.01) ( Figure 3C), respectively.
The P5 analogs proved more active than the peptide Pep2-8 (TVFTSWEEYLDWV) and its mutant analogs [44], with P5-Best being about 30- and 20-fold more potent than the Pep2-8 mutant peptides. In addition, at the cellular level, Pep2-8 and both Pep2-8 analogs were less efficient than the P5 analogs in restoring the LDLR protein levels and the functional ability of hepatocytes to absorb LDL from the extracellular environment [44]. On the contrary, P9-38, a cyclized Pep2-8 analog, proved 35-fold more potent than P5-Best in impairing the PPI between PCSK9 and LDLR, displaying an IC50 of 20 nM, and was 1000-fold more potent in restoring the LDLR level and functionality in HepG2 cells [45].
Finally, P5-Best is slightly more potent than the poly-imidazole Rim13, which inhibits the interaction between PCSK9 and LDLR with an IC50 of 1.4 µM, a value similar to that of the reference peptide P5. In the same concentration range as Rim13, the P5 analogs succeed in restoring the functional activity of the LDLRs on the surface of hepatocytes, preventing their degradation [45].
Although all P5 analogs restored the LDLR protein to levels similar to those of peptide P5, statistical analysis revealed that, from a functional point of view, both P5-Best and P5-S7A not only restored the ability of hepatic cells to uptake LDL from the extracellular environment but also improved this capability relative to untreated cells. These results suggest that the hypocholesterolemic effect occurs through a dual mechanism of action involving the modulation of HMG-CoAR activity and protein levels. To probe this behavior in depth, further HMG-CoAR activity assays and western blot experiments were performed.
The decreased functional ability of HepG2 cells to absorb LDL from the extracellular space observed after incubation with PCSK9 (4 µg/mL) improved after treatment with P5 and the P5 analogs (50 µM) (Figure 3C). The data points represent the average ± SD of four independent experiments performed in duplicate. Data were analyzed using one-way ANOVA followed by Tukey's post-hoc test; (*) p < 0.05, (**) p < 0.01, and (***) p < 0.001. C: control sample.
P5 Analogs Modulate the Hepatic PCSK9 Pathway
Although their ability to reduce the secretion of mature PCSK9 was weaker than that of P5, P5-Best, P5-H6A, and P5-S7A could also induce slight reductions of 2.7 ± 1.9%, 7.4 ± 2.6%, and 5.1 ± 1.7%, respectively (Figure 5C). These results agree with the behavior of another natural peptide from soybean β-conglycinin [49]. In particular, the YVVNPDNNEN peptide (at 250 µM) reduces the PCSK9 protein level and its secretion via the modulation of HNF-1α [49]. Our results suggest that lupin P5 and its new analogs, being active at 50 µM in hepatic cells, are 5-fold more potent than YVVNPDNNEN. In addition, it was also demonstrated that YVVNPDNNEN is not a dual-inhibitor peptide, since it inhibits HMG-CoAR activity [50] but not the PPI between PCSK9 and the LDLR [49]. Even though P5 and the P5 analogs are less active than statins as HMG-CoAR inhibitors and their clinical application is still distant, they display the unique feature of inhibiting both the HMG-CoAR and PCSK9 targets, making them lead compounds for developing new peptidomimetics and/or small molecules endowed with improved activity on both targets involved in the control of circulating cholesterol levels.
Docking of P5-S7A and MD Simulations on HMG-CoAR
The experimental assays on the purchased peptides highlighted the improvement in the dual inhibitory activity of the P5 mutant peptides. Specifically, P5-S7A showed the lowest IC50 value for HMG-CoAR. Thus, docking and MD simulations were conducted to acquire atomistic details on the putative binding mode of P5-S7A in complex with HMG-CoAR; this study can pave the way for the design of more dual-active peptides. P5-S7A was docked to the statin binding site of HMG-CoAR using Glide (see the Experimental Section for details), and the best docking pose (gscore = −9.881 kcal/mol) was selected for further 500 ns-long MD simulations in explicit water solvent (see Figure S4 in the Supporting Information for the RMSD plots). As the enzyme was in the dimeric state, the statin present in the other binding site was not deleted (see the Experimental Section for details), to preserve the overall folding of the simulated system. At the end of the MD simulations, the root mean square deviation (RMSD) of the peptide was analyzed, and the peptide conformations sampled during the MD production run were clustered using the average-linkage method previously described for the PCSK9/peptide complexes. The results showed that only one cluster was mainly populated, representing 73.1% of the peptide conformations; the structure representative of this cluster is depicted in Figure 7. This HMG-CoAR/P5-S7A complex showed an H-bond network between the P5-S7A-Ala7 and -Ala9 backbone atoms and the side chain of HMG-CoAR-Asn658. P5-S7A-Leu3 projected its side chain into a small hydrophobic pocket shaped by HMG-CoAR-Leu853, -Ala856, and -Leu857. Interestingly, the presence of an intramolecular H-bond between the side chains of P5-S7A-Lys5 and -Asp10 improved the overall conformational stability of the peptide. Moreover, the supposed binding mode of P5-S7A was consistent with the binding affinity data, which indicated that the IC50 of P5-H6A on HMG-CoAR was close to that of P5-S7A: both residues could point their side chains into an effectively empty pocket shaped by HMG-CoAR-Leu853, -Ala856, and -Leu857, without creating any interactions with the HMG-CoAR counterpart, so the substitution of positions 6 and 7 with alanine did not elicit any strong variation in the experimental binding affinity. This hypothesis paves the way for the design of new P5 analogs in which positions 6 and 7 are replaced with unnatural amino acids capable of creating stronger interactions with HMG-CoAR.
The binding mode supposed for P5-S7A was then compared to that of P5 in complex with HMG-CoAR, to understand the possible basis of the improved binding affinity displayed by the mutant peptide. In our previous article [21], we reported the results of docking calculations on P5. Here, performing MD simulations starting from the P5 docking pose (see Figure S5 in the Supporting Information for the RMSD plots), we noted that, in the complex conformation representative of the most populated cluster (70%), P5 adopted a binding mode in which the side chain of P5-S7 created two intramolecular H-bonds with the NH groups of P5-A9 and P5-D10 (Figure 8). In the mutant peptide P5-S7A, these internal bonds cannot be created because of the absence of the OH group in position 7. In our opinion, this led to a peptide endowed with increased conformational freedom, leaving the C-terminal residues free to adopt a cyclic conformation in which an internal salt bridge can be formed between the side chains of P5-K5 and P5-D10. This conformation could be more prone to create remodeled and improved interactions with the enzyme.
Finally, the binding mode supposed for P5-S7A was also compared to that of atorvastatin in complex with HMG-CoAR (as reported in the PDB, accession code 1HWK [51]). The structural alignment of the two complexes (Figure 9) allowed us to suppose that the first four residues of P5-S7A essentially reproduce the contacts made by the three aromatic substituents of the atorvastatin pyrrole ring. In particular, the aniline is mimicked by the P5-S7A-Ile2 side chain, P5-S7A-Leu3 overlaps the phenyl ring of the statin, and the p-F-phenyl ring of atorvastatin is spatially close to P5-S7A-Pro4 (Figure 9). Unfortunately, the remaining moiety of the peptide points to an enzyme area different from the one in which the 3,5-dihydroxyheptanoic acid moiety is bound in the HMG-CoAR/atorvastatin complex. This portion is considered essential for the biological activity of the statins, which could explain the relatively low affinity displayed by the mutant peptide. More efforts should be made to design peptides capable of mimicking such interactions and occupying the HMG-CoAR pocket shaped by the Lys735, Ser684, Arg590, Lys691, Asn755, and Glu559 residues (Figure 9B).
Conclusions
In this study, building on promising data on the dual hypocholesterolemic activity of the lupin peptide P5, we computationally designed new analogs endowed with improved PCSK9 and HMG-CoAR inhibitory activities. After the computational alanine-scanning mutational analysis, the non-hotspot residues of P5 were suitably substituted with other amino acids capable of improving the complementarity between PCSK9 and the peptide. Using our affinity maturation protocol, we then selected the P5-Best, P5-H6A, and P5-S7A peptides for the experimental assays. The experimental data confirmed the theoretical studies, revealing that the affinity of the mutant P5-H6A peptide for PCSK9 was reduced almost seven-fold (IC50 = 9.0 µM), whereas the affinity of P5-S7A was slightly higher than that of P5 (IC50 = 1.45 µM). Remarkably, the mutant peptide P5-Best showed the lowest PCSK9 IC50 value of 0.7 µM. Further biological assays demonstrated that all mutant peptides maintained the dual PCSK9/HMG-CoAR inhibitory activity and improved the ability of HepG2 cells to absorb extracellular LDL by up to 254% (P5-Best). Doubtless, peptide P5 and its analogs display activity in the micromolar range, suggesting that their clinical exploitation remains challenging; therefore, further efforts must be pursued to improve their dual-inhibitory activity. However, the evidence supports the view that P5 and its analogs can be considered promising lead compounds for the development of a new class of hypocholesterolemic drugs endowed with dual inhibitory activity towards both the PCSK9 and HMG-CoAR targets. Indeed, this dual, synergistic activity may achieve the biological effect better than compounds active on only one of these targets.
Further experiments will be performed to evaluate the intestinal stability of the P5 analogs and their propensity to undergo trans-epithelial transport by mature Caco-2 cells. These experiments will use the parent peptide P5 and its natural intestinal metabolite P5-met as positive controls. This study confirms that a multidisciplinary approach to the design of new peptides is successful in identifying peptides endowed with hypocholesterolemic effects, offering a promising starting point for the design of peptidomimetics that lack the bioavailability problems of peptides. | 11,453.4 | 2022-03-01T00:00:00.000 | [
"Chemistry",
"Medicine",
"Biology"
] |
Chromatic Aberration Correction in Harmonic Diffractive Lenses Based on Compressed Sensing Encoding Imaging
Large-aperture, lightweight, and high-resolution imaging are hallmarks of major optical systems. To eliminate aberrations, traditional systems are often bulky and complex, whereas the small volume and light weight of diffractive lenses position them as potential substitutes. However, their inherent diffraction mechanism leads to severe dispersion, which limits their application in wide spectral bands. Addressing the dispersion issue in diffractive lenses, we propose a chromatic aberration correction algorithm based on compressed sensing. Utilizing the diffractive lens’s focusing ability at the reference wavelength and its degradation performance at other wavelengths, we employ compressed sensing to reconstruct images from incomplete image information. In this work, we design a harmonic diffractive lens with a diffractive order of M=150, an aperture of 40 mm, a focal length f0=320 mm, a reference wavelength λ0=550 nm, a wavelength range of 500–800 nm, and 7 annular zones. Through algorithmic recovery, we achieve clear imaging in the visible spectrum, with a peak signal-to-noise ratio (PSNR) of 22.85 dB, a correlation coefficient of 0.9596, and a root mean square error (RMSE) of 0.02, verifying the algorithm’s effectiveness.
Introduction
Traditional optical systems, designed to eliminate aberrations with multiple lenses, are complex, bulky, and expensive, failing to meet weight requirements [1]. Diffractive lenses, with their micrometer-scale thickness, offer the advantages of ultra-thinness and light weight. A single diffractive lens can intricately control the light field, holding the potential to replace traditional refractive and reflective systems. However, their inherent diffraction mechanism leads to significant chromatic dispersion, limiting high-precision wide-spectrum imaging applications [2]. Traditional solutions involve adding a reverse-power diffractive lens to correct chromatic aberrations or designing multi-layered diffractive lens structures to enhance efficiency [3]. In addition to improvements to element structures, chromatic aberration correction can be achieved through image processing algorithms, which is a central concept in computational imaging [4]. In computational imaging systems, the optical system can be incomplete, and high-quality images can be recovered from system-captured data through image reconstruction algorithms. In recent decades, computational imaging technology has been applied in various fields, such as single-pixel imaging [5-7], structured light 3D imaging [8], lensless imaging [9-11], coded imaging [12,13], and hyperspectral imaging [14-17], becoming a research hotspot in the field of optical imaging. Introducing computational imaging technology into diffractive lens systems significantly enhances optical system design freedom and simplifies system structures. Nikonorov et al. proposed a three-channel chromatic aberration correction algorithm, blurring and sharpening deblurred images in other channels for color correction and reconstructing Fresnel lens imaging results, but the recovered images still exhibited significant noise [18]. Peng et al. used a particle swarm algorithm to optimize the diffractive lens height map, reconstructing images based on cross-channel image priors, but the low diffraction efficiency resulted in foggy images [16]. Sitzmann et al.
first introduced the framework for the joint design of optics and algorithms, obtaining the imaging data of optical systems through simulation [19]. This approach combines deep learning with backend image restoration algorithms and is colloquially known as the end-to-end design framework. It has pioneered a new paradigm in computational imaging design. Since its introduction, there has been an abundance of related work, including multispectral imaging [20,21], depth estimation [22,23], and large-field imaging [24]. Although the end-to-end design framework has achieved breakthroughs in optical device performance compared to traditional design methods, it still faces several challenges. These include dependency on datasets, the need for high computational power, and the inability to design large-aperture diffractive optical elements (to our knowledge, there are no diffractive lenses with an aperture larger than 2 cm currently available). Therefore, our work continues to employ the traditional approach of separating the design of optical components from backend algorithms. At the same time, the end-to-end design framework is limited to the high-cost photolithography process for fabricating diffractive optical elements. In our work, we opt for a cost-effective and straightforward approach by employing turning machining for the processing of optical components. Traditional image restoration algorithms include point spread function-based deblurring algorithms such as Lucy-Richardson [25], cross-channel non-blind deconvolution [26], and cross-channel non-blind convex optimization deconvolution based on estimated point spread functions [18]. Diffractive lenses have undergone substantial evolution, transitioning from simple diffractive optical elements to sophisticated harmonic diffractive lenses. Harmonic diffractive lenses offer significant advantages over traditional diffractive lenses, including improved chromatic aberration control, higher diffraction efficiency, broader bandwidth operation, better system integration, increased manufacturing flexibility, and enhanced customization capabilities. Harmonic optical elements have interesting and useful optical properties, and they may be used for lightweight optical components in future space telescopes [27], remote sensing [28], and other applications [29].
In this work, we treat chromatic aberration as the result of different focal points across different spectral bands. For simplicity of discussion, we present a schematic diagram showing light of three different wavelengths (RGB) converging at different focal points after passing through the same diffractive lens, as shown in Figure 1. By performing full-focus restoration on individual bands and then merging them, we obtain an image with chromatic aberration corrected, and we propose a novel image restoration method based on compressed sensing for chromatic aberration correction. The overall workflow is shown in Figure 2: first, we design a 150th-order harmonic diffractive lens based on gradient descent [28], conduct fabrication experiments, capture images, perform reconstruction based on three channels, focus the designed wavelength channel image, utilize compressed sensing for reconstruction in the other, incomplete channels, and successfully correct chromatic aberrations through simple iteration. We conducted both infield and outfield experiments; infield experiments involved displaying true images on a monitor, capturing these with a prototype system, and reconstructing the images to facilitate quantitative evaluation of the restoration results, such as PSNR. Outfield experiments entailed direct capture of natural landscapes for reconstruction, with the quality of restoration assessed solely through visual inspection by human observers. We designed the 150th-order harmonic diffractive lens using the gradient descent method. Given a randomly generated initial diffractive lens structure, we obtained the PSF of the current structure; our goal was to make the PSF converge as closely as possible to a single point. We calculated the error between the current PSF and the target PSF and adjusted the structure of the diffractive lens using the gradient descent algorithm. The final optimized lens was then fabricated, a prototype system was assembled, and images were captured. Utilizing the proposed image recovery algorithm based on compressed sensing, we achieved chromatic aberration correction.
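To make the recovery idea concrete, the sketch below shows an ISTA-style sparse deblurring of one degraded channel, minimizing ||Ax − b||² + λ||x||₁ in a DCT basis. It is an illustrative stand-in under the assumption of a known per-channel PSF, not the authors' actual algorithm.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.signal import fftconvolve

def ista_deblur(b, psf, lam=1e-3, step=0.5, iters=200):
    """Recover a channel image x from blurred data b, given its PSF."""
    A  = lambda x: fftconvolve(x, psf, mode="same")              # blur
    At = lambda y: fftconvolve(y, psf[::-1, ::-1], mode="same")  # adjoint
    x = np.zeros_like(b)
    for _ in range(iters):
        grad = At(A(x) - b)                        # data-term gradient
        z = dctn(x - step * grad, norm="ortho")    # move to sparse domain
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
        x = idctn(z, norm="ortho")
    return x
```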
Diffractive Lens Imaging Model
Diffractive lenses, as an integral component of modern optical engineering, exhibit two distinctive characteristics that set them apart from their refractive counterparts, namely multi-level diffraction and wavelength sensitivity, often referred to as dispersion, as illustrated in Figure 1 (left). These features are not merely incidental but are fundamental to the operational principles and applications of diffractive optics. Multi-level diffraction, a hallmark of diffractive lenses, arises from their unique physical structure. Unlike traditional lenses, which rely on the continuous curvature of their surfaces to bend light, diffractive lenses achieve focus through the constructive and destructive interference of light waves. This is facilitated by the lens's surface, which is etched or molded into multiple discrete levels or steps. Each level corresponds to a specific phase shift, orchestrating the light waves to converge at the focal point. This multi-level approach allows diffractive lenses to precisely control the phase of incoming light, enabling them to focus light efficiently and with a high degree of flexibility in design. As such, diffractive lenses can be engineered to achieve specific optical functions that would be challenging or impossible to accomplish with conventional refractive lenses. Wavelength sensitivity, or dispersion, is another critical aspect of diffractive lenses. This characteristic stems from the way diffractive lenses manipulate light, which is inherently dependent on the wavelength of the incident light. The diffraction efficiency of a lens, that is, its ability to direct light towards the desired focal point, varies with the wavelength, leading to a phenomenon where different wavelengths are focused at slightly different positions. This dispersion effect can be a double-edged sword. On the one hand, it allows for the design of lenses that can selectively focus or filter light based on wavelength, which is advantageous for applications such as chromatic correction and spectral imaging. On the other hand, it necessitates careful design to mitigate unwanted chromatic aberrations in applications where uniform focus across a broad spectrum of wavelengths is desired. The interplay between multi-level diffraction and wavelength sensitivity defines the operational envelope and design considerations for diffractive lenses. These characteristics enable the lenses to be highly compact and lightweight, offer unique dispersion properties, and achieve high focusing efficiencies. However, they also pose challenges, particularly in terms of managing dispersion and designing for broadband applications. Advances in computational design and fabrication technologies continue to push the boundaries of what is possible with diffractive optics, enabling increasingly sophisticated optical devices that leverage the unique advantages of multi-level diffraction and wavelength sensitivity. In our work, we primarily address the dispersion issue with a post-processing algorithm. According to scalar diffraction theory [30], the point spread function (PSF) can be expressed, in its standard Fresnel-propagation form, as

p(x, y; \lambda) = \left| \frac{A}{\lambda z_i} \iint P(u, v; \lambda)\, \exp\!\left\{ \frac{j\pi}{\lambda z_i} \left[ (x-u)^2 + (y-v)^2 \right] \right\} \, du\, dv \right|^2,

where A is the amplitude constant, z_i is the distance from the lens to the imaging plane, (u, v) are the coordinates on the lens plane, and P(u, v; λ) is the pupil function. The pupil function for a diffractive lens is

P(u, v; \lambda) = \mathrm{Circ}(u, v) \cdot \exp\left[ j\,\Phi(u, v) \right],

where Circ(·) is the circular aperture function and Φ(u, v) is the phase delay introduced at each point after passing through the lens. In our work, we acquire the desired imaging effects and PSF by optimizing the Φ(u, v) term.
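The PSF above can be evaluated numerically with FFT-based Fresnel propagation; the sketch below models a scaled-down 5 mm aperture so the phase stays well sampled (the paper's 40 mm lens would require a much denser grid), and all grid parameters are illustrative.

```python
import numpy as np

wl, f0 = 550e-9, 0.32        # design wavelength (m) and focal length (m)
n, dx = 1024, 10e-6          # grid samples and pitch (m)

x = (np.arange(n) - n // 2) * dx
u, v = np.meshgrid(x, x)
r2 = u**2 + v**2

aperture = (np.sqrt(r2) <= 2.5e-3).astype(float)  # Circ(.), 5 mm aperture
phi = -np.pi * r2 / (wl * f0)                     # ideal focusing phase
pupil = aperture * np.exp(1j * phi)               # P(u, v; lambda)

# Fresnel transfer-function propagation from the lens plane to z = f0
fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wl * f0 * (FX**2 + FY**2))
field = np.fft.ifft2(np.fft.fft2(pupil) * H)

psf = np.abs(field)**2
psf /= psf.sum()   # normalized PSF; re-run per wavelength to see dispersion
```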
In Fourier optics, the incoherent imaging model is viewed as the convolution of the optical system input i(x, y; λ) with the PSF:

A(i(x, y; \lambda)) = i(x, y; \lambda) \otimes p(x, y; \lambda),

where ⊗ denotes the two-dimensional convolution operation and A(i(x, y; λ)) represents the chromatic aberration-affected image. For traditional diffractive lenses, the PSF p(x, y; λ) depends heavily on the wavelength, leading to significant dispersion. This wavelength dependence causes different focal lengths f for different wavelengths λ. For a diffractive lens designed for wavelength λ₀ and focal length f₀, the focal length at other wavelengths satisfies

f(\lambda) = \frac{\lambda_0 f_0}{\lambda}.

In practice, the camera captures the scene through image sensors. Image sensors vary in sensitivity to different wavelengths, requiring the convolution-derived image to be multiplied by a spectral response function, which is an intrinsic characteristic of the sensor. Generally, cameras capture RGB images (the central wavelengths of the R, G, and B channels are 640 nm, 550 nm, and 460 nm, respectively, and their total coverage includes the 500-800 nm band of our designed diffractive lens), and our post-processing algorithm is also based on RGB images. Therefore, in designing the diffractive lens, the wavelength range covers each color channel of RGB. In this case, the continuous PSF can be discretized into three channels, and the entire forward imaging model can be written as

b_c(x, y) = \int_{\lambda} R_c(\lambda) \left[ i(x, y; \lambda) \otimes p(x, y; \lambda) \right] d\lambda, \quad c \in \{R, G, B\},

where R_c(λ) is the spectral response of channel c and b_c is the captured channel image. In our work, we designed a diffractive lens with the reference wavelength in the G channel. For the R and B channels, severe image degradation occurs, resulting in green-tinted captured images. The image captured on the reference focal plane is decomposed into R, G, and B channels. From a single-channel perspective, dispersion can be understood as incomplete image information in the captured R and B channels, presenting a degraded effect. Retrieving the original image is a process of recovering complete information from limited data. Hence, we employ compressed sensing theory to recover the individual channels and then superimpose the recovered channels to achieve chromatic aberration correction.
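The discretized three-channel model then amounts to one convolution per RGB channel; the sketch below assumes the per-channel PSFs (e.g., from the Fresnel computation above evaluated at 640/550/460 nm) are already available.

```python
import numpy as np
from scipy.signal import fftconvolve

def forward_model(scene_rgb, psfs):
    """Blur each channel of an (H, W, 3) scene with its own 2-D PSF."""
    out = np.empty_like(scene_rgb)
    for c in range(3):  # 0: R, 1: G, 2: B
        out[..., c] = fftconvolve(scene_rgb[..., c], psfs[c], mode="same")
    return out
```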
Harmonic Diffractive Lens Design
In the realm of scalar diffraction theory, diffractive lenses are traditionally modeled as phase masks, an approach that simplifies the interaction of light with the lens's microstructured surface. This conventional model is predicated on the manipulation of the phase of light passing through the lens, employing a first-order diffractive surface to achieve the desired optical effects. In contrast, harmonic diffractive lenses represent a sophisticated advancement in this domain, characterized by their employment of a diffractive order M that exceeds unity. This higher-order approach allows for the modulation of light with greater finesse, resulting in a lens that exhibits both diffractive and refractive properties. The structural distinction between harmonic diffractive lenses and their traditional counterparts is primarily attributed to the phase depth factor associated with the former. As M increases, the microstructure of the lens surface becomes more pronounced, reducing the number of annular zones required to achieve a specific optical effect. This relationship is illustrated in Figure 1 right, where the varying M values manifest in distinct lens profiles. Notably, at higher values of M, the harmonic diffractive lens increasingly resembles a traditional refractive lens in form, albeit with a complex microstructure that presents significant manufacturing challenges. Despite these challenges, the harmonic diffractive lens boasts a notable advantage in terms of diffraction efficiency: it is capable of achieving theoretical 100% efficiency at the design wavelength and at multiple harmonic wavelengths, surpassing the performance of traditional diffractive lenses, particularly in applications requiring wideband imaging. This efficiency is achieved through the strategic manipulation of the lens's phase profile, as described by the phase compression formula

φ_doe = mod(φ_lens, 2Mπ),

where φ_lens is the continuous phase of the refractive lens and φ_doe is the compressed phase, with mod representing the modulo operation of φ_lens with 2Mπ. Based on the optical path difference formula, the corresponding sag height of the harmonic diffractive lens is

H = λ0 · φ_doe / (2π (n(λ0) − 1)),

where λ0 is the design wavelength and n(λ0) is the refractive index of the lens material at λ0.
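A minimal sketch of the two formulas above, assuming the design wavelength and the material's refractive index at that wavelength are known (the numeric example uses a PMMA-like n ≈ 1.49, our assumption):

```python
import numpy as np

def compress_phase(phi_lens, M):
    """phi_doe = mod(phi_lens, 2*M*pi): wrap the continuous refractive-lens
    phase into zones of height 2*M*pi, M being the diffractive order."""
    return np.mod(phi_lens, 2 * M * np.pi)

def sag_height(phi_doe, lam0, n0):
    """Convert compressed phase to physical sag via the optical path
    difference: h = phi_doe * lam0 / (2*pi*(n0 - 1)), where n0 is the
    material's refractive index at the design wavelength."""
    return phi_doe * lam0 / (2 * np.pi * (n0 - 1))

# e.g. for M = 150 at lam0 = 550 nm and n0 ~ 1.49, the maximum structure
# depth is about 150 * 550e-9 / 0.49 ~ 0.17 mm.
```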
In the advanced domain of optical engineering, diffractive lenses stand out for their ability to precisely manipulate light, offering innovative solutions for a wide range of applications, from imaging systems to laser focusing devices. The fabrication of these lenses involves sophisticated techniques, predominantly photolithography and diamond turning, each with its own set of advantages and constraints. Photolithography, while precise, necessitates the discretization of the lens's continuous height profile into multiple steps, thereby inflating production costs and complexity. Conversely, diamond turning, the method we have employed in our work, leverages direct machining to achieve the desired surface profile, requiring detailed parameterization of the lens design for accurate fabrication. Our focus is on the development of a harmonic diffractive lens, distinguished by its diffractive order M, and on optimizing the harmonic diffractive lens's continuous surface to enhance its optical performance. The key characteristics of this lens are its ability to achieve near-perfect diffraction efficiency at the design wavelength and its harmonics. The design process for such a lens necessitates a comprehensive understanding of its geometric and optical properties, starting with the determination of the maximum radius R_max of the annular zones, which is essential for defining the lens's aperture. The formula for R_max is given by

R_max = √( 2NMλ0·f0 + (NMλ0)² ),

where N represents the number of annular zones, λ0 the design wavelength, f0 the focal length, and M the diffractive order. This equation is pivotal for laying out the spatial arrangement of the annular zones, which are integral to the lens's functionality. Following the spatial definition, the structural height parameter H_doe for each annular zone is calculated. This parameter influences not only the focusing ability of the lens but also its efficiency in light manipulation across different wavelengths. The structural height parameter H_doe is determined by the equation

H_doe(r) = c_i r² / ( 1 + √( 1 − (1 + k) c_i² r² ) ) + z_off,i,

where c_i and z_off,i represent the curvature and axial offset of the ith annular zone, respectively, and k is the conic coefficient.
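For intuition, the snippet below evaluates these two relations numerically; zone_radii follows the R_max expression above, and conic_sag the per-zone height equation. Applied to the design parameters reported in the next paragraph (M = 150, λ0 = 550 nm, f0 = 320 mm, N = 7), it yields an outermost zone radius of about 19.2 mm, consistent with the 40 mm aperture. The function names are our own.

```python
import numpy as np

def zone_radii(N, M, lam0, f0):
    """Boundary radii of the N annular zones; the last entry is R_max.
    Evaluates r_n = sqrt(2*n*M*lam0*f0 + (n*M*lam0)^2) for n = 1..N."""
    n = np.arange(1, N + 1)
    return np.sqrt(2 * n * M * lam0 * f0 + (n * M * lam0) ** 2)

def conic_sag(r, c_i, k, z_off_i):
    """Per-zone structural height H_doe: conic profile with curvature c_i,
    conic coefficient k and axial offset z_off_i."""
    return c_i * r**2 / (1 + np.sqrt(1 - (1 + k) * c_i**2 * r**2)) + z_off_i

# The paper's design (M = 150, lam0 = 550 nm, f0 = 320 mm, N = 7) gives
print(zone_radii(7, 150, 550e-9, 0.32) * 1e3)  # ~7.3 ... 19.2 mm radii
```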
In our design endeavor, we aimed to fabricate a harmonic diffractive lens with a diffractive order of M = 150, an aperture of 40 mm, a focal length f0 = 320 mm, a design wavelength λ0 = 550 nm, and N = 7 annular zones. This design was optimized over 200 iterations using a gradient descent method, a process that underscored the intricacies involved in balancing the physical constraints with optical performance objectives. The outcome of this optimization is shown in the contour of Figure 3a, which encapsulates the nuanced surface profile necessary for achieving the lens's design goals. To assess the lens's focusing capabilities across a spectrum of wavelengths, the Strehl ratio (SR) was employed as a benchmark. The SR is a critical measure of optical performance, with a value of 80% denoting the diffraction limit and values exceeding 95% indicating an almost aberration-free system. Our findings, depicted in Figure 4b, demonstrate that the lens exhibits superior focusing performance between 550 nm and 600 nm, in alignment with our design intentions. This performance peak, as illustrated in Figure 4a, correlates with the minimal defocus amount f + ∆f, where ∆f is zero, indicating optimal focusing at the design wavelength. As the wavelength diverges from this value, a shift in focal length is observed, accompanied by a reduction in diffraction efficiency, underscoring the wavelength-dependent behavior of diffractive lenses. The development of the harmonic diffractive lens, with its high diffractive order and optimized annular zone structure, represents a significant advance in optical engineering. The meticulous design and fabrication process, rooted in a deep understanding of optical physics and material science, illustrates the potential of such lenses in pushing the boundaries of what is achievable in light manipulation. This work not only contributes a novel lens design but also sets a precedent for future research on high-efficiency, wideband optical components. Through this endeavor, we have showcased the synergy between theoretical modeling, computational optimization, and precision manufacturing, highlighting the intricate balance required to translate complex optical concepts into tangible, high-performance optical devices.
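The SR figures quoted here can be computed from simulated PSFs; a minimal sketch (our own helper, assuming both PSFs are sampled on the same grid, with the ideal reference obtained, for instance, from the same aperture with a residual-free phase):

```python
import numpy as np

def strehl_ratio(psf, psf_ideal):
    """SR = peak intensity of the real PSF over the peak of the ideal
    (aberration-free) PSF, both normalized to unit total energy.
    SR > 0.8 is the usual diffraction-limited criterion."""
    return (psf / psf.sum()).max() / (psf_ideal / psf_ideal.sum()).max()
```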
Compressive Sensing
Compressed sensing (CS), a concept introduced by Candès and Tao, leverages the sparsity of signals to reconstruct them from significantly fewer samples than required by the Nyquist sampling theorem. The core idea is that if a signal is sparse in a certain basis (i.e., most coefficients are zero or near zero), sufficient information can be captured through fewer non-adaptive linear measurements, allowing for accurate reconstruction of the original signal. A critical condition in this methodology is the restricted isometry property (RIP), expressed as

(1 − δ_k) ‖x‖₂² ≤ ‖Ax‖₂² ≤ (1 + δ_k) ‖x‖₂²,

where k represents the sparsity of the signal x, A is the measurement matrix, ‖•‖₂ denotes the L2 norm, and δ_k is a positive number less than 1. Matrix A is said to satisfy the RIP condition if it fulfills this inequality for all k-sparse vectors x. However, verifying the RIP condition for a given measurement matrix in practical applications presents considerable challenges. Engineers and researchers therefore often focus more on pragmatic aspects, such as the volume of measurement data and the sparsity level of the signal under consideration.
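Because the RIP cannot be verified exhaustively, it is usually probed empirically. The toy sketch below (our own construction) samples random k-sparse vectors under a Gaussian measurement matrix, which is known to satisfy the RIP with high probability, and estimates the smallest δ_k covering the samples:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 64, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian matrix, RIP w.h.p.

# probe (1 - delta_k)||x||^2 <= ||Ax||^2 <= (1 + delta_k)||x||^2
ratios = []
for _ in range(2000):
    x = np.zeros(n)
    idx = rng.choice(n, k, replace=False)      # random k-sparse support
    x[idx] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

delta_k = max(1 - min(ratios), max(ratios) - 1)  # smallest delta covering samples
print(f"empirical delta_{k} ~ {delta_k:.2f}")
```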
In the context of image recovery, two pivotal factors, sparsity and the incoherence of the measurement matrix, stand out as critical to the success of compressed sensing algorithms.
In the specialized domain of imaging systems, the work of Wu et al. [11] showcased an innovative application of CS theory. They demonstrated that in a single diffractive lens imaging system, the system's point spread function (PSF) can be effectively utilized as the measurement matrix. In a single diffractive lens system, the following imaging relationship holds:

y = F⁻¹( H_T ⊙ F(x) ),

where y represents the system output image, x represents the source image, F and F⁻¹ respectively represent the Fourier transform and the inverse Fourier transform, H_T represents the transfer function (the Fourier transform of the PSF), and ⊙ denotes element-wise multiplication. Further, we can express this as y = Kx, where K = F*ΣF and Σ is the diagonal matrix formed from H_T.
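In code, this measurement operator and its adjoint (needed later by the iterative solver) reduce to FFT-domain multiplications; a minimal sketch with our own naming:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def make_measurement_operator(psf):
    """Build K = F^-1 Sigma F from the system PSF: Sigma is the diagonal
    operator holding the transfer function H_T = F{psf}. K applies the
    optical blur; Kt is its adjoint, used by iterative recovery solvers."""
    H = fft2(np.fft.ifftshift(psf))                        # transfer function
    K = lambda x: np.real(ifft2(H * fft2(x)))
    Kt = lambda y: np.real(ifft2(np.conj(H) * fft2(y)))
    return K, Kt
```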
This application underscores the versatility of CS theory in adapting to practical scenarios where the measurement matrix arises from the physical properties of the imaging system itself. The PSF, characterizing how a system responds to a point source or point object, inherently encodes information about the system's resolution and imaging characteristics. By leveraging the PSF as the measurement matrix, the diffractive lens imaging system embodies the CS principles, enabling it to capture and reconstruct high-quality images from a number of measurements that defy conventional expectations. This approach not only highlights the adaptability of CS theory to diverse engineering challenges but also opens new avenues for enhancing imaging system performance through the strategic exploitation of signal sparsity and measurement incoherence. Our work builds upon this foundational research, applying it to the correction of chromatic aberration in single diffractive lens systems.
Algorithm Recovery Model
Image recovery stands as a cornerstone in the domain of signal processing and computational imaging, addressing the challenge of reconstructing a high-quality image from degraded observations. This task is emblematic of linear inverse problems, a category characterized by the need to invert a known linear process that has been applied to the signal or image of interest. The quintessence of solving these problems lies in the formulation of an optimization problem, where the objective is to minimize a convex function that encapsulates both fidelity to the observed data and regularization terms imposing prior knowledge or assumptions about the solution. The optimization problem can be succinctly described by the following equation:

x̂ = argmin_x (1/2) ‖y − Kx‖² + λ Φ(x),    (11)

where K represents the linear operation mapping the target data x to the observed data y, ‖•‖ denotes the norm, λ ∈ [0, +∞) is the regularization parameter, and Φ(•) represents the regularization method. The choice of regularization is pivotal, with common approaches including L1, L2, and total variation (TV) regularization, each suited to different aspects of image characteristics. L1 and L2 regularization focus on the magnitude of the image coefficients, promoting sparsity and smoothness, respectively. In contrast, TV regularization, especially pertinent to our work, excels in preserving edges while promoting smoothness within homogeneous regions of the image. This method proves particularly effective in the context of diffractive lens imaging, where chromatic dispersion introduces color bias and blurring, manifesting as sparsity in the gradient domain. TV regularization can be categorized into isotropic and anisotropic forms, mathematically represented as

Φ_TV(x) = Σ_i √( (Δ_i^h x)² + (Δ_i^v x)² ),    (13)

Φ_TV(x) = Σ_i ( |Δ_i^h x| + |Δ_i^v x| ),    (14)

where Formulas (13) and (14) represent isotropic and anisotropic regularization, respectively, and Δ_i^h and Δ_i^v are the horizontal and vertical first-order differential operators. Considering that natural image gradients are typically anisotropic and non-uniform, as shown in Figure 5, we opt for Formula (14). The optimization objective function can then be rewritten as

x̂ = argmin_x (1/2) ‖y − Kx‖² + λ Σ_i ( |Δ_i^h x| + |Δ_i^v x| ).    (15)

Combining Equations (11) and (15), our model applies this objective to each color channel c, with the corresponding channel PSF as the measurement operator:

x̂_c = argmin_{x_c} (1/2) ‖y_c − K_c x_c‖² + λ Σ_i ( |Δ_i^h x_c| + |Δ_i^v x_c| ),  c ∈ {R, G, B}.    (16)

This formulation underscores our approach to mitigating the challenges posed by diffractive lenses, leveraging anisotropic TV regularization to counteract the color bias and blurring while preserving essential image features. By optimizing this function, we aim to achieve a balance between fidelity to the observed data and the enforcement of a priori knowledge about natural image characteristics. The result is a reconstructed image that not only closely matches the observed data but also retains a natural appearance and sharpness, despite the inherent limitations of the imaging system. This methodological framework not only exemplifies the application of linear inverse problem-solving to image recovery but also highlights the adaptability of regularization techniques to specific imaging challenges, paving the way for advancements in computational imaging and beyond. We use the TwIST algorithm [31], proposed by Bioucas-Dias and Figueiredo, to optimize the objective function; it improves upon IST (iterative shrinkage/thresholding), enhancing convergence rates. We employ the mean square error (MSE) as an evaluation metric for recovered images, defined as

MSE = (1/(mn)) Σ_{i,j} ( x̂(i, j) − x(i, j) )²,

for images of size m × n, where x̂ is the recovered image and x the reference.
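To make the pieces concrete, the sketch below implements the anisotropic TV of Formula (14), the MSE metric, and a bare IST iteration on an L1-regularized objective. It is a simplification, not the paper's solver: TwIST [31] adds a two-step combination of previous iterates, and the soft-threshold step stands in for the TV proximal operator used in practice.

```python
import numpy as np

def tv_aniso(x):
    """Anisotropic TV (Formula (14)): sum of absolute horizontal and
    vertical first-order differences."""
    return np.abs(np.diff(x, axis=1)).sum() + np.abs(np.diff(x, axis=0)).sum()

def ist_recover(y, K, Kt, lam, step=1.0, n_iter=10):
    """Bare iterative shrinkage/thresholding on
    0.5 * ||K x - y||^2 + lam * ||x||_1."""
    x = Kt(y)
    for _ in range(n_iter):
        z = x - step * Kt(K(x) - y)            # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # shrinkage
    return x

def mse(x_hat, x_ref):
    """Mean square error between recovered and reference images."""
    return np.mean((x_hat - x_ref) ** 2)
```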
Additionally, we introduce the correlation coefficient (CC) metric. For two images A and B, the correlation coefficient is defined as

CC = Σ_{m,n} (A_{mn} − Ā)(B_{mn} − B̄) / √( Σ_{m,n} (A_{mn} − Ā)² · Σ_{m,n} (B_{mn} − B̄)² ),
where Ā and B̄ are the mean values of the images. For RGB images, we compute this metric separately for each channel.
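A direct implementation of the CC metric, applied per channel as described (our own helper names):

```python
import numpy as np

def corr_coeff(A, B):
    """Correlation coefficient between two images A and B."""
    a, b = A - A.mean(), B - B.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def corr_coeff_rgb(A, B):
    """Computed separately for each channel, as in the text."""
    return [corr_coeff(A[..., c], B[..., c]) for c in range(3)]
```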
Results
In the experimental setup detailed in our study, as depicted in Figure 6a, we utilized a high-resolution monitor to display photographs for image acquisition. The images captured by our imaging system, however, did not fully occupy the available frame due to the limitations inherent in aligning the digital display with the camera's field of view. The point spread function (PSF), a critical component for understanding the system's imaging capabilities, was collected using an MV-CH250-90UM/C camera. This camera has a high resolution of 5120 × 5120 pixels, allowing for detailed capture of the PSF through a parallel light tube, a method which ensures the accuracy and consistency of the PSF data collected. Owing to the practical challenges faced in perfectly aligning the screen with the camera, the resultant captured image dimensions were constrained to 605 × 582 pixels. This limitation necessitated a strategic approach to process and utilize the PSF information effectively while managing the computational demands of the reconstruction process. To this end, we opted to crop the PSF image to a resolution of 2048 × 2048 pixels. This resolution was chosen to be significantly larger than that of the captured images, thus preserving the essential information of the PSF while keeping the computational workload manageable. To further refine our image reconstruction process, we employed the technique of cyclic convolution. This involved padding the captured images to match the size of the PSF, thereby enabling us to perform cyclic convolution between the image and the PSF. The iterative reconstruction algorithm employed in our study was the TwIST (two-step iterative shrinkage/thresholding) algorithm. By iterating this algorithm 10 times, we aimed to strike a balance between achieving a high-quality reconstruction and maintaining computational efficiency. The ultimate recovery effect, as illustrated in Figure 7, showcases the efficacy of our methodological choices. The use of cyclic convolution, in conjunction with the TwIST algorithm, facilitated the recovery of images with remarkable clarity and detail, demonstrating the potential of our approach in overcoming the challenges posed by chromatic aberration in diffractive lenses.
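The padding-plus-cyclic-convolution step described above amounts to zero-padding each captured channel to the PSF grid and multiplying in the FFT domain; a minimal sketch with placeholder arrays standing in for the real capture and PSF:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def pad_to(img, shape):
    """Zero-pad a captured image up to the PSF grid so that FFT
    multiplication implements cyclic (circular) convolution."""
    out = np.zeros(shape, dtype=float)
    out[:img.shape[0], :img.shape[1]] = img
    return out

capture = np.zeros((605, 582))     # stand-in for one captured channel
psf = np.zeros((2048, 2048))
psf[1024, 1024] = 1.0              # stand-in PSF (a delta, for the demo)
blurred = np.real(ifft2(fft2(pad_to(capture, psf.shape)) * fft2(psf)))
```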
Furthermore, we conducted outdoor scene experiments, using the setup shown in Figure 3c, and compared the results with other PSF-based recovery algorithms, as shown in Figure 8. Our algorithm demonstrates several key advantages in image reconstruction, particularly when compared with traditional methods such as the backpropagation and Lucy-Richardson algorithms. First, its efficiency and accuracy in handling diffraction-limited systems are notable: the algorithm achieves higher resolution and clarity in the reconstructed images, which is critical for applications requiring fine detail and precision. Another advantage lies in the algorithm's ability to manage noise effectively. In many imaging scenarios, especially in low-light conditions or when dealing with highly scattering media, noise can significantly degrade the quality of the reconstructed image. Our algorithm incorporates noise reduction that maintains the fidelity of the original signal while minimizing the impact of noise, ensuring cleaner, more accurate reconstructions. Furthermore, the algorithm's robustness against aberrations is a significant benefit: both the backpropagation and Lucy-Richardson algorithms exhibit a noticeable fogging effect, while our algorithm does not. We added additional experiments to demonstrate this, as shown in Figure 9.
Discussion
In this pioneering study, we embarked on the ambitious project of designing and developing a harmonic diffractive lens. This lens was engineered to operate efficiently across a broad spectrum of wavelengths, specifically from 500 to 800 nm. The foundation of our endeavor was the strategic application of the gradient descent method, which was instrumental in optimizing the diffractive structures of the lens to ensure peak performance within the targeted wavelength range. At the core of our investigation was an in-depth analysis of the lens's focusing capabilities, particularly its handling of chromatic aberration, a prevalent obstacle in diffractive lens systems. Chromatic aberration, the differential focusing of light wavelengths that leads to blurred or distorted images, was a critical focus of our work. When the target wavelength is focused on the image plane (corresponding to the green channel in this work), the other wavelengths (red and blue channels) exhibit a defocusing effect. However, since photon information can still be received, we consider this a unique form of "encoding". We approached this challenge by conceptualizing chromatic aberration as a unique form of defocusing occurring independently across individual color channels. In response, we developed an image processing algorithm based on compressed sensing theory, crafted to reconstruct images free from chromatic aberrations by merging all-focus images reconstructed from the sparse and incomplete information of the separate channels. This innovation effectively surmounted the inherent limitations imposed by chromatic dispersion in diffractive lenses. A cornerstone of our methodological framework was the strategic employment of the system's PSF as the measurement matrix within the compressed sensing paradigm. By leveraging the PSF's intrinsic incoherence, we achieved a comprehensive recovery of information from channels affected by chromatic aberration, thereby not only correcting chromatic aberration but also significantly enhancing imaging quality. The empirical validation of our algorithm through a series of fabrication experiments underscored the practical viability of our approach, illustrating its superiority over previous computational imaging designs. Our research is further distinguished by the employment of a larger-aperture (40 mm) diffractive lens, surpassing the limitations of contemporary end-to-end design frameworks constrained by extensive datasets and considerable computational demands. Hence, our approach broadens the scope of diffractive lens design by sidestepping these constraints.
While our contributions signify a substantial leap forward in correcting chromatic aberration and advancing diffractive lens design, we recognize the ongoing need for refinement. Enhancing image clarity remains a pivotal aim for our future research endeavors. By persistently refining our algorithm and design strategy, we aim to unveil further potential for high-fidelity, chromatic aberration-free imaging, thereby catalyzing new applications and technological advancements in the optical domain. This integration of lens design optimization, advanced image processing algorithms, and innovative fabrication techniques embodies a comprehensive strategy towards achieving high-fidelity, chromatic aberration-free imaging. Such collaborative efforts, as echoed in the work of [32,33], not only reinforce our findings but also pave the way for future innovations in optical imaging technologies. As underscored by [34], the continued exploration of diffractive optics and computational algorithms holds great promise for the field, heralding a new era of optical solutions that enrich both the scientific community and technological applications. Our future work will not be limited to imaging at three wavelengths but will expand to multi-wavelength, that is, multispectral imaging, to capture information across a broader range of frequencies. Through this approach, we aim to achieve a more comprehensive and detailed analysis of target scenes. Multispectral imaging can provide richer information than traditional single-wavelength or three-wavelength imaging, including the chemical composition of materials, surface textures, and the ability to differentiate between objects. The development of this technology will significantly enhance the depth and breadth of our understanding of complex scenes, thereby playing a crucial role in fields such as environmental monitoring, medical diagnosis, and the authentication of artworks. Furthermore, we plan to develop more advanced image processing algorithms to handle multispectral data, addressing the additional complexity introduced by the increased number of wavelengths. Our goal is to optimize image quality and accuracy through these algorithms while maintaining computational efficiency, ensuring real-time processing and analysis of the data. In summary, we believe that by extending to multispectral imaging and combining it with advanced image processing techniques, we can break through current limitations and begin a new chapter in our understanding of the material world.
Figure 1.
Figure 1. Left shows a schematic of chromatic dispersion in diffractive lenses; right illustrates the structure of a harmonic diffractive lens.
Figure 3.
Figure 3. (a) Height map of the harmonic diffractive lens obtained through the gradient descent method; (b) fabrication of the optimized height map and assembly of the prototype; (c) experimental setup for outdoor scene photography.
Figure 4.
Figure 4. SR of our designed harmonic diffractive lens. The lens demonstrates superior focusing performance between 550 nm and 600 nm, exhibiting a performance peak as depicted in panel (a), which correlates with the minimal defocus amount f + ∆f, where ∆f is zero, signifying optimal focusing at the design wavelength. As the wavelength deviates from this value, a shift in focal length is observed, accompanied by a reduction in diffraction efficiency, emphasizing the wavelength-dependent behavior of diffractive lenses. There is a good SR (greater than 0.8) in the range of 500-600 nm, as shown in (b).
Figure 5.
Figure 5. An analysis of chromatic and achromatic images in the gradient domain is conducted, where the achromatic image exhibits sparsity in the gradient domain, while the chromatic image shows non-sparsity. This meets the conditions of compressed sensing for image recovery.
Figure 6.
Figure 6. We conducted both indoor and outdoor experiments. (a) Illustration of our indoor experimental system, where photos are displayed on a screen and captured using our assembled system. (b) System diagram for acquiring the system's point spread function (PSF) using a parallel light tube. (c) Schematic diagram corresponding to (b).
Figure 7.
Figure 7. Image reconstruction results. It is worth noting that due to the issue of pixel-value matching when photographing the screen, some information will be lost, leading to a decline in the indicators.
Figure 8.
Figure 8. The captured images were processed using our proposed algorithm for chromatic aberration correction and were compared with other algorithms. The results from other algorithms still exhibit a fogging effect, while ours visibly demonstrate superior performance.
Figure 9.
Figure 9. Additional experimental renderings prove the robustness of our algorithm and its advantages over the other two algorithms. | 7,495.6 | 2024-04-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Metagenomic insights into zooplankton‐associated bacterial communities
Summary Zooplankton and microbes play a key role in the ocean's biological cycles by releasing and consuming copious amounts of particulate and dissolved organic matter. Additionally, zooplankton provide a complex microhabitat rich in organic and inorganic nutrients in which bacteria thrive. In this study, we assessed the phylogenetic composition and metabolic potential of microbial communities associated with crustacean zooplankton species collected in the North Atlantic. Using Illumina sequencing of the 16S rRNA gene, we found significant differences between the microbial communities associated with zooplankton and those inhabiting the surrounding seawater. Metagenomic analysis of the zooplankton-associated microbial community revealed a highly specialized bacterial community able to exploit zooplankton as a microhabitat and thus mediate biogeochemical processes generally underrepresented in the open ocean. The zooplankton-associated bacterial community is able to colonize the zooplankton's internal and external surfaces using a large set of adhesion mechanisms and to metabolize complex organic compounds released or exuded by the zooplankton, such as chitin, taurine and other complex molecules. Moreover, the high number of genes involved in iron and phosphorus metabolism in the zooplankton-associated microbiome suggests that this bacterial community mediates specific biogeochemical processes (through the proliferation of specific taxa) that are generally underrepresented in the ambient waters.
Introduction
Zooplankton and microbes are fundamental components of the ocean's lower food web. Crustacean zooplankton release copious amounts of particulate organic matter (POM) originating from phytoplankton, heterotrophic microzooplankton and detritus into the ambient water (Heinle et al., 1977; Calbet, 2001). Heterotrophic microbes are responsible for most of the dissolved organic matter (DOM) mineralization in the open ocean (Azam et al., 1983; Cherrier et al., 1996). These two components of the marine food web are generally treated as separate entities only connected through trophic cascades, albeit microbes and zooplankton are dynamically linked at different ecological levels (Azam and Malfatti, 2007; Tang et al., 2010). For example, microbes may exploit zooplankton as a nutrient- and carbon-enriched microhabitat by colonizing its exoskeleton and/or gut (Carman and Dobbs, 1997; Tang et al., 2010). In addition to the nutrient-enriched conditions, the zooplankton's gut provides a hypoxic environment that may facilitate marginal but important anaerobic processes such as denitrification, dissimilatory nitrate or nitrite reduction and methanogenesis in the oxygenated open waters (De Angelis and Lee, 1994; Tang et al., 2011; Glud et al., 2015; Stief et al., 2017). Finally, the zooplankton's acidic digestive tract may promote iron recycling and solubilization via multiple pathways involving microbes (Tang et al., 2011; Nuester et al., 2014; Schmidt et al., 2016). These processes deliver bioavailable iron to the ambient water that can be utilized by phytoplankton, thus promoting iron fertilization (Schmidt et al., 2016).
Even though the majority of microbes are free-living, several studies have shown that the abundance of zooplankton-associated bacteria can be orders of magnitude higher than that of free-living bacteria on a per-volume basis (Tang et al., 2006; Tang et al., 2010; Tang et al., 2011; Schmidt et al., 2016). Culture-based studies indicated that zooplankton-associated microbial (i.e., bacterial and archaeal) communities consist of similar taxa as ambient water communities (Delille and Razouls, 1994; Hansen and Bech, 1996). However, recent culture-independent studies reveal a strong niche partitioning of bacterial communities between the zooplankton and the surrounding waters, probably driven by the different physico-chemical conditions (Grossart et al., 2009; De Corte et al., 2014). These findings also suggest an active microbial exchange between the two habitats in which each environment favours the proliferation of specific taxa generally underrepresented in the other environment (Grossart et al., 2009; De Corte et al., 2014).
The aim of this study was to compare the phylogenetic composition of the bacterial community inhabiting the ambient water with that associated with different crustacean zooplankton species collected in the North Atlantic Ocean using 16S rRNA gene Illumina sequencing. Zooplankton individuals were collected during day and night to assess whether the feeding status influences the zooplankton-associated bacterial community. Finally, metagenomic analyses provided insights into the yet underexplored metabolic interaction between the zooplankton and the associated bacterial community.
Bacterial community richness and diversity
Rarefaction analyses [phylogenetic diversity (PD), Chao richness and observed operational taxonomic units (OTUs)] showed clear differences between zooplankton-associated and ambient water bacterial communities (Supporting Information Fig. S1). The rarefaction curves for zooplankton-associated bacterial communities approached a plateau; however, the rarefaction curves of the ambient water communities did not level off (Supporting Information Fig. S1). Additionally, the PD and Chao richness were significantly higher (t-test, P < 0.001) for the ambient water than for the zooplankton-associated bacterial community (Fig. 1A-C, Supporting Information Table S1). Furthermore, the Simpson evenness was significantly higher in the zooplankton-associated than in the free-living bacterial community (Fig. 1D, Supporting Information Table S1). No significant differences were found between the diversity, richness and evenness indexes of ambient water bacterial communities collected at different depth layers or between zooplankton-associated communities collected during the day versus night (ANOVA on ranks, P > 0.001) (Fig. 1 and Supporting Information Fig. S1, Table S1). Members of the zooplankton-associated bacterial community were more evenly distributed than those of the ambient water and exhibited a comparatively low diversity with few but abundant OTUs.
Bacterial community composition in zooplankton versus ambient water
Principal coordinates analysis (PCoA) using weighted UniFrac distances (Lozupone et al., 2011) was used to statistically explore and visualize the similarity between the different bacterial communities. The PCoA analysis clearly separated ambient water and zooplankton-associated bacterial communities (Fig. 2), with the first coordinate accounting for 61% and the second for 11% of the variance. Zooplankton-associated communities clustered together and, within them, clustered according to the taxon of the zooplankton individuals (Fig. 2A) and to the sample location (Fig. 2B). We did not observe clustering associated with the time of collection of the zooplankton (day vs. night) (Fig. 2C). The shared OTUs among groups of samples (surface and mesopelagic free-living bacterial communities and zooplankton-associated bacterial communities collected during day and night) were determined with Mothur (Schloss et al., 2009) from the OTU distribution obtained in QIIME. Only 0.7% of the OTUs were shared and ubiquitously present in the zooplankton-associated and ambient water communities (Supporting Information Fig. S2). The zooplankton samples (day vs. night) shared only 10% of the total OTUs, whereas the communities from the two depth layers (subsurface and mesopelagic) shared 18% of the OTUs. Therefore, the number of shared OTUs was higher within ambient water bacterial communities than within zooplankton-associated bacterial communities. In addition, the contribution of unique OTUs was higher in the ambient water than in the zooplankton-associated bacterial communities (Supporting Information Fig. S2).
Ecotypes in the zooplankton-associated bacterial community
Oligotyping, a supervised computational method to investigate the diversity of closely related but distinct bacterial organisms, was used to classify selected bacterial phylogenetic groups. Oligotyping analysis at the nucleotide level of the Flavobacteriaceae and Rhodobacteraceae (the two dominant families in the zooplankton-associated bacterial community) identified 15 quality-controlled oligotypes selected from the highest entropy values (using 16 and 53 components for Flavobacteriaceae and Rhodobacteraceae, respectively) (Fig. 4A and B). The zooplankton-associated oligotypes largely differed from the bacterioplankton oligotypes in the ambient water (Supporting Information Fig. S3). The z-score distribution of the Rhodobacteraceae oligotypes resulted in two main clusters, in agreement with the clustering of samples obtained according to the Bray-Curtis similarity index. The first cluster grouped samples obtained from Calanus sp. and Paracalanus sp., and the second one grouped samples obtained mainly from Paraeuchaeta sp. (Fig. 4A and C). Flavobacteriaceae oligotypes did not cluster according to zooplankton species, location or time of the day (Fig. 4B and D).
Metagenomic analysis of the zooplankton-associated microbial community
The metagenomic data obtained from the copepod-associated (Calanus sp. and Paraeuchaeta sp.) microbial community were used to characterize the potential metabolic pathways present in the microbial consortium associated with the zooplankton's gut and/or carapace.
Genes indicative of pH homeostasis. The metagenomic analysis indicated the presence of genes encoding transporters involved in pH homeostasis (Supporting Information Table S2). These transporter-associated genes were mainly affiliated with the Alphaproteobacteria (54%) and Bacteroidetes (32%) (Fig. 5, Supporting Information Table S2). The uptake of ammonium or release of ammonia may also be used to raise the cell's pH to control its cytoplasmic acidity. In this context, ammonia transporter genes and several metabolic genes, such as serine dehydratase (SDH), ornithine cyclodeaminase (OCD), alanine dehydrogenase (ALD) and dissimilatory nitrite reductase (NIRB, NIRD), were also found in the metagenome of zooplankton-associated bacteria (Fig. 6, Supporting Information Table S2). The ammonium transporters accounted for a similar number of reads as the potassium/proton antiporters and proton transporters, mainly associated with Alphaproteobacteria (26%) and Flavobacteriaceae (47%), the two dominant groups of the zooplankton-associated bacterial community (Fig. 5, Supporting Information Table S2). Additionally, several other genes, such as glutamate decarboxylase (GAD) (present in diverse phyla), arginine decarboxylase (mainly in Bacteroidetes, 64%) and carbonic anhydrase (mainly associated with Bacteroidetes, 72%) (Fig. 5, Supporting Information Table S2), may also play an important role in regulating the cellular pH by utilizing H⁺ ions in their metabolic reactions and thus increasing the cytosol pH.
Table 1. Relative contribution (%, SD) of the most abundant family to the total number of sequences associated with zooplankton samples collected during day (750 m) and night (250 m) and from ambient water samples collected from the subsurface and upper (300-500 m) and lower (1000 m) mesopelagic layer.
Energy transduction- and cell protection-related genes.
Several genes related to glycosyl hydrolases (GH), such as chitinase-encoding genes (CBD19), were found in the metagenome (Fig. 6). Chitinase-encoding genes were widespread among the bacterial community (Fig. 5), while chitin deacetylase was associated exclusively with Bacteroidetes (100% of the chitin deacetylase reads). Other genes involved in the degradation of complex molecules released by the zooplankton through digestive processes were also found. Cellulase accounted for 200 reads, mainly associated with Bacteroidetes (53%), while amylase accounted for 431 reads, mostly related to Bacteroidetes (42%) and Gammaproteobacteria (31%) (Fig. 5, Supporting Information Table S2). These two genes encode enzymes primarily involved in the degradation of cellulose, starch and other related polysaccharides of phytoplankton origin. Additionally, metagenomic analysis revealed several genes encoding proteins involved in anaerobic metabolic pathways. Genes indicative of fermentative pathways were also present in the metagenomes, for example, genes encoding enzymes involved in the transformation of pyruvate into lactate [pyruvate ferredoxin oxidoreductase (PFOR)] and acetyl-CoA into ethanol (aldehyde dehydrogenase, alcohol dehydrogenase and acetyl-CoA synthase; ALDH, ADH) (Fig. 6), potentially generating oxidizing agents such as NAD⁺ for redox reactions (Fig. 6). Lactate dehydrogenase (LDH, Fig. 6) and alcohol dehydrogenase (ALDH and ADH, Fig. 6) encoding genes were distributed amongst Alphaproteobacteria, Gammaproteobacteria and Bacteroidetes (Fig. 5).
Additionally, the zooplankton-associated bacterial community harboured many genes involved in iron utilization, primarily associated with Bacteroidetes, Alphaproteobacteria and Gammaproteobacteria. Iron ABC transporter and Fe³⁺-dicitrate transporter (fecA) genes accounted for a large fraction of the iron-related genes (with 200 and 1900 reads, respectively; Fig. 5), whereas iron chelation-associated genes such as the ferrochelatase gene were present only at moderate abundance (141 reads, Fig. 5). We also detected a ferric reductase gene encoding an oxidoreductase that interconverts ferric (Fe³⁺) and ferrous (Fe²⁺) iron (Fig. 6).
Bacteria-mesozooplankton associations
Studies have shown that the bacterial community associated with crustacean zooplankton resides on the exoskeleton (epibionts) and/or is associated with the zooplankton's gut (endosymbionts) (Eckert and Pernthaler, 2014). Culture-independent studies showed that Alphaproteobacteria and Actinobacteria are the most abundant members of the bacterial community associated with marine and freshwater zooplankton, followed by Bacilli and Gammaproteobacteria (Grossart et al., 2009; De Corte et al., 2014). These previous findings are in partial agreement with those obtained in this study. We found that in the temperate and sub-arctic North Atlantic Ocean, the zooplankton-associated bacterial community is mainly composed of Flavobacteria, Alphaproteobacteria (particularly Rhodobacterales) and Gammaproteobacteria (Fig. 3). Flavobacteria represent the second most abundant clade after Proteobacteria in marine ecosystems (Glockner et al., 1999; Gomez-Pereira et al., 2010). Members of this bacterial clade are able to degrade high molecular weight organic matter, such as cellulose and chitin, suggesting a commensal or parasitic interaction between Flavobacteria and zooplankton (Cottrell and Kirchman, 2000; Beier and Bertilsson, 2013). Zooplankton moults and carcasses are also a major source of chitin in the ocean, and their colonization by bacteria may also play a key role in the C and N cycling of the ocean. Rhodobacteraceae, the second most abundant family found in the zooplankton-associated bacterial community (Fig. 3), have been reported to live associated with marine organisms, such as corals, sponges and microalgae (Ridley et al., 2005; Burke et al., 2011; Roder et al., 2014) and to contribute to biofilm formation (Pujalte et al., 2014), indicating that this group may play a major role in the colonization of the zooplankton exoskeleton.
The microbial community associated with the zooplankton's gut might consist of a transient (passing through the digestive system of the host) and a persistent bacterial community (Grossart et al., 2009; Tang et al., 2010). To test whether the diel cycle (which can be related to the feeding status) might influence the bacterial-host interactions, zooplankton samples were collected at different times of the day (day vs. night). In contrast to a previous report, our results do not indicate significant diel differences in the composition of the zooplankton-associated bacterial community (Fig. 2). Therefore, the zooplankton-associated bacterial community might have been shaped by factors other than the diel migration and/or feeding status. The taxa-specific microbiome and the strong dependence on the sampling location (Fig. 2) suggest that the ambient water microbial community and the presence of a suitable host are likely the main factors determining the composition of the zooplankton-associated bacterial community. However, only a few 16S rRNA gene oligotypes within the specific bacterial taxa analysed (i.e., Flavobacteriaceae, Rhodobacteraceae) dominated the zooplankton-associated communities. This suggests that the zooplankton-associated bacterial community consists of specialized ecotypes belonging to only a few phylogenetic groups that act as an interactive community (such as a consortium) able to metabolize different compounds released by the zooplankton, either as exudates from the body surface or as by-products of the digestion processes occurring in the zooplankton gut.
Zooplankton-associated bacterial community and its implication in the global biogeochemical cycles
The zooplankton-associated microbial community exploits zooplankton as a microhabitat. In this microhabitat, genes indicative of surface attachment, encoding pili, fimbriae and chitin-recognition proteins, are used to colonize the zooplankton's internal and/or external surfaces (Tran et al., 2011; Bodelon et al., 2013).
The high number of glycosyl hydrolase-encoding genes (mainly associated with the Flavobacteria clade, Fig. 5) suggests the capability of the zooplankton-associated bacterial community to metabolize polysaccharides and amino sugars, such as cellulose and chitin, respectively (Beier and Bertilsson, 2013). In agreement with our findings, Flavobacteria have been shown to be able to utilize chitin and N-acetyl glucosamine (Cottrell and Kirchman, 2000). Taken together, these results suggest a tight association between crustacean zooplankton and Flavobacteria, the latter being able to metabolize high molecular weight organics from the zooplankton's exoskeleton. Intriguingly, we did not obtain sequences related to Vibrio spp., another important player in chitin mineralization often associated with crustacean zooplankton (Erken et al., 2015), in contrast to previous studies conducted in coastal systems (Montanari et al., 1999; Turner et al., 2009). This discrepancy could be explained by a relatively lower abundance of Vibrio spp. in cold open ocean waters as compared to warm coastal regions (Vezzulli et al., 2012). Additionally, the presence of amylase- and pectin esterase-encoding genes suggests the capability of zooplankton-associated bacteria to metabolize starch and pectin (Moal et al., 1987; Alderkamp et al., 2007) derived from crustacean zooplankton grazing on phytoplankton.
Metagenomic and -proteomic studies revealed that taurine might be an important substrate for heterotrophic marine bacteria (Poretsky et al., 2010; Sowell et al., 2011; Williams et al., 2012). The importance of taurine for bacterial growth has primarily been demonstrated using SAR11 cultures (Carini et al., 2013). The concentration and turnover rate of dissolved taurine in the ocean have only recently been determined (Clifford et al., 2017). Taurine is an organo-sulfonate found in the tissues of marine invertebrates such as zooplankton and is a potential source of carbon, nitrogen and sulfur for heterotrophic bacteria (Williams et al., 2012; Carini et al., 2013). Thus, the copepod-associated bacterial community is in close proximity to the primary source of taurine, the copepod's body. The presence of taurine catabolic genes in the metagenomes, such as taurine-pyruvate aminotransferase and sulfoacetaldehyde acetyltransferase, indicates the potential importance of taurine as a substrate for zooplankton-associated bacterial communities (Figs 5 and 6). Surprisingly, even though most of the taurine catabolic genes were associated with Alphaproteobacteria, none were affiliated to SAR11, likely due to the low contribution of this clade to the zooplankton-associated bacterial community.
The copepod's hindgut exhibits low oxygen concentrations (from suboxic to anoxic) and low pH (Tang et al., 2011), suggesting that copepods' guts are microhabitats suitable for anaerobic microbes able to tolerate acidic conditions. The metagenome of the zooplankton-associated community harbours genes indicative of pH regulation of the cytosol's acidity by removing protons or using ammonia as a proton scavenger to produce ammonium (Booth, 1985; Slonczewski et al., 2009).
Recent publications have documented not only high levels of dissimilatory nitrate and nitrite reduction activity but also the presence of genes involved in DNRA pathways in marine zooplankton-associated microbial communities (Glud et al., 2015; Stief et al., 2017). Genes encoding enzymes for DNRA were also retrieved in the metagenome presented here. However, our data (based on the relative gene abundances) point towards ammonium biosynthesis (thus, assimilatory nitrate/nitrite reduction) rather than N₂ gas dissipation by bacteria (Supporting Information Table S2). In contrast to previous reports on the presence of anammox and anaerobic methane oxidation on zooplankton carcasses sinking through oxygen minimum zones (Stief et al., 2017), the metagenome of zooplankton-associated microbial communities from the open, well-oxygenated Atlantic lacked genes indicative of these anaerobic pathways.
Microbes living at neutral or basic pH, such as marine free-living bacteria, are exposed to low iron availability due to the insolubility of ferric iron (Fe³⁺). However, the environmental conditions in the copepod's gut, characterized by low pH and low oxygen (Tang et al., 2011), favour the bioavailable ferrous form (Fe²⁺) and thus have the potential to facilitate iron remineralization (Tang et al., 2011; Nuester et al., 2014; Schmidt et al., 2016). Therefore, ferrous ions could be directly available for cytochrome c production without the need for ferric reductase (Schroder et al., 2003). However, the gene encoding this latter enzyme was relatively abundant in the copepod-associated bacterial community (Fig. 5, Supporting Information Table S2), suggesting that iron is present in different forms in the zooplankton microhabitat. Thus, the zooplankton-associated bacterial metabolic pathways could play an important role in the recycling of iron following zooplankton grazing of diatoms in the euphotic layers of iron-limited regions of the global ocean (Hutchins and Bruland, 1994; Hutchins et al., 1995). Moreover, zooplankton grazing on phytoplankton may also play an important role in the phosphorus recycling of the open ocean (Corner, 1973; Olsen et al., 1986). The bioavailable phosphorus released by the zooplankton (mainly in the form of inorganic phosphate) could be directly metabolized by the copepod-associated bacterial community, which harbours a large number of genes encoding phosphate transporters (Fig. 5, Supporting Information Table S2). Hence, gut-associated bacteria might scavenge phosphate within the gut and consequently reduce the amount of phosphorus compounds released via faecal pellet production into the environment. Additionally, the presence of detoxification genes implies that the zooplankton-associated bacterial community responds to the presence of toxic by-products derived from digestive processes occurring in the host's gut.
The contribution of archaea to the zooplankton-associated microbial community was negligible in the present study; only a low number of reads associated with amylase of archaeal origin were found (Supporting Information Table S2). Our results suggest that in the zooplankton's gut, bacteria outcompete archaea. However, a previous study showed that zooplankton digestive tracts are likely sites for methanogenesis (De Angelis and Lee, 1994), a process mediated by archaea. Our results further suggest that methane production by zooplankton-associated communities is most likely a species-specific process. Thus, only specific zooplankton species might be suitable hosts for methanogenic archaea and fuel methane production in oxygenated waters.
Conclusion
In the North Atlantic Ocean, the bacterial community associated with crustacean zooplankton is mainly shaped by the zooplankton host (taxa-specific interactions) and by the bacterial community of the ambient water to which the zooplankton host is exposed. The zooplankton-associated bacterial community is highly specialized, able to adhere to and colonize internal and/or external surfaces, and to utilize high molecular weight organic compounds and metabolites, such as taurine, released by zooplankton. Therefore, the zooplankton-bacteria consortium can mediate specific biogeochemical processes (through the proliferation of specific bacterial taxa) that are generally underrepresented in the ambient waters. However, further studies to quantitatively assess the contribution of these communities to the global biogeochemical cycles are required.
Study area and sampling
Water samples were collected during the MEDEA-II cruise (June-July 2012) at four different stations located between 50°51′N 28°51′W and 66°01′N 02°41′W (Supporting Information Fig. S4). Seawater samples were collected with a rosette sampler equipped with 25 L Niskin bottles. To characterize the bacterial community of the ambient waters, 10 L of seawater were sampled from the lower euphotic layer (100 m) and the upper (300-500 m) and lower (1000 m) mesopelagic layer. The seawater was filtered onto 0.2 μm GTTP membrane filters (Millipore) and the filters were stored at −80°C until further processing in the laboratory. Mesozooplankton samples were collected twice per day at the same station as the ambient water using vertical plankton tows (200 μm mesh size, hoisted at 30 m min⁻¹) from 200 m during the night, and from 750 m depth during the day. These are the depth layers within which the majority of the crustacean zooplankton migrates over a diel cycle (Steinberg et al., 2000; Tang et al., 2010). The content of the cod end of the plankton net was transferred into a plankton splitter and concentrated over a 70 μm mesh Nitex screen. The zooplankton samples were then transferred into 50 ml Greiner tubes and stored at −80°C until sorting. Once at the home laboratory, zooplankton individuals were thawed at room temperature and transferred to a Petri dish for sorting of the dominant crustacean zooplankton taxa (i.e., Calanus, Paracalanus, Paraeuchaeta, Themisto, Evadne and Oncaea). To evaluate the zooplankton species-associated bacterial community, 10 individuals of each taxon were collected under a dissecting microscope using clean forceps or a sterile pipette and transferred into sterile Eppendorf tubes for nucleic acid extraction.
DNA extraction
The DNA of the ambient water samples was extracted using the UltraClean Soil DNA Isolation Kit (MoBio Laboratories). Zooplankton DNA was extracted using a phenol-chloroform extraction protocol (Weinbauer et al., 2002), preceded by a bead-beating step to facilitate lysis of the zooplankton individuals. A NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies) was used to check the quality of the extracted DNA.
Next generation sequencing and bioinformatics analyses of the bacterial 16S rRNA genes
The 16S rRNA genes of the zooplankton-associated and ambient water bacterial communities were PCR amplified with the bacterial primers 341F (5′-CCTACGGGNGGCWGCAG-3′) and 805R (5′-GACTACHVGGGTATCTAATCC-3′) (Klindworth et al., 2013). PCR amplification of the 16S rRNA gene was carried out in a 25 μl reaction volume using Fermentas Taq polymerase (Thermo Scientific) in a Mastercycler (Eppendorf) with the following parameters: initial denaturation at 95°C for 5 min, followed by 30 cycles of denaturation at 94°C for 1 min, annealing at 57.5°C for 30 s and extension at 72°C for 45 s, with a final extension at 72°C for 7 min. The PCR products were additionally purified with a PCR purification kit (5-Prime). The quality of the PCR product was checked on a 2% agarose gel. The 16S rDNA amplicons were subsequently sequenced with Illumina MiSeq high-throughput sequencing (2 × 250 bp paired-end platform) at IMGM Laboratories GmbH (Martinsried, Germany).
The bioinformatic analysis of the 16S rRNA gene sequences followed the standard operating procedure pipeline of QIIME (Caporaso et al., 2010). Rarefaction curves, PD, Chao1, OTU richness, Shannon index of diversity and the Simpson evenness index were calculated with QIIME. Pairwise UniFrac distance and principal coordinate analysis (PCoA) (Lozupone et al., 2011) were used to compare the bacterial community composition between the samples (implemented in QIIME). A t-test (implemented in Sigma Plot v.11) was used to assess differences between samples. Oligotyping analysis of Flavobacteriaceae and Rhodobacteraceae families was conducted following Eren's lab pipeline (available from http://oligotyping.org) (Eren et al., 2013).
Prokaryotic DNA isolation, whole genome amplification and metagenomic analysis
Forty copepod individuals (20 Calanus sp. and 20 Paraeuchaeta sp.) were used for the metagenomic analysis of the zooplankton-associated bacterial community. Since the genomic material extracted from the zooplankton-associated samples contained both eukaryotic and prokaryotic DNA, the DNA extracts were treated with the Looxster Enrichment kit (Analytikjena, Germany) following the manufacturer's protocol to enrich the bacterial DNA and remove the eukaryotic DNA. The resulting genetic material (enriched in prokaryotic DNA) was subsequently amplified with the GenomePlex whole genome amplification kit (Sigma-Aldrich) following the manufacturer's instructions. The quality of the amplified DNA was checked on a 2% agarose gel, and the DNA was afterwards purified with a PCR purification kit (5-Prime). The isolated DNA was used to construct a Nextera library (Illumina, San Diego, USA). The obtained library was subsequently sequenced with Illumina MiSeq high-throughput sequencing (2 × 250 bp paired-end platform) at IMGM Laboratories GmbH (Martinsried, Germany).
The filtered prokaryotic reads obtained from the Illumina sequencing were screened for sequence similarity against the KEGG GENES protein database using DIAMOND BLASTX (Buchfink et al., 2015) with an e-value cutoff of 10⁻⁵ and a minimum alignment length cutoff of 30 amino acids. Subsequently, the resulting reads were compared to the NCBI non-redundant database using DIAMOND BLASTX with default parameters. Taxonomic classification and functional annotation to KEGG functions were analysed using MEGAN v5.10 (Huson et al., 2007) with a minimum LCA score of 50, a maximum expected value of 10⁻⁵ and minimum support set to 1.
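As an illustration of the alignment-filtering step, the sketch below applies the stated cutoffs (e-value ≤ 10⁻⁵, alignment length ≥ 30 amino acids) to DIAMOND's default tabular output, whose columns follow the BLAST outfmt-6 convention (alignment length in column 4, e-value in column 11); the file names are hypothetical:

```python
import csv

# Filter DIAMOND BLASTX tabular output by the cutoffs stated in the text:
# e-value <= 1e-5 and alignment length >= 30 amino acids.
with open("hits.tsv") as fin, open("hits.filtered.tsv", "w", newline="") as fout:
    writer = csv.writer(fout, delimiter="\t")
    for row in csv.reader(fin, delimiter="\t"):
        length, evalue = int(row[3]), float(row[10])  # outfmt-6 columns
        if length >= 30 and evalue <= 1e-5:
            writer.writerow(row)
```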
The sequence data generated are publicly available in the DDBJ database under the accession numbers DRA005574 (metagenome) and DRA005573 (amplicons).
Supporting information
Additional Supporting Information may be found in the online version of this article at the publisher's web-site: Table S1. Total number of OTUs (cutoff 97% similarity), Chao species richness, phylogenetic and Shannon diversity indexes and Simpson evenness obtained from 16S rDNA sequences from ambient water and zooplankton-associated bacteria. Table S2. Number of reads for the main metabolic pathways of the Calanus sp. and Paraeuchaeta sp. associated bacterial communities, their phylogenetic affiliation (expressed in relative abundance) and their lowest and highest taxonomic identity (in %). | 6,140.4 | 2017-10-27T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Sustained Action of Developmental Ethanol Exposure on the Cortisol Response to Stress in Zebrafish Larvae and Adults
Background Ethanol exposure during pregnancy is one of the leading causes of preventable birth defects, leading to a range of symptoms collectively known as fetal alcohol spectrum disorder. More moderate levels of prenatal ethanol exposure lead to a range of behavioural deficits including aggression, poor social interaction, poor cognitive performance and an increased likelihood of addiction in later life. Current theories suggest that adaptation in the hypothalamo-pituitary-adrenal (HPA) axis and neuroendocrine systems contributes to mood alterations underlying behavioural deficits and vulnerability to addiction. In using zebrafish (Danio rerio), the aim is to determine whether developmental ethanol exposure provokes changes in the hypothalamo-pituitary-interrenal (HPI) axis (the teleost equivalent of the HPA), as it does in mammalian models, thereby opening the possibility of using zebrafish to elucidate the mechanisms involved and to test novel therapeutics to alleviate deleterious symptoms. Results and Conclusions The results showed that developmental exposure to ambient ethanol (20-50 mM, 1-9 days post-fertilisation) had immediate effects on the HPI, markedly reducing the cortisol response to air-exposure stress, as measured by whole-body cortisol content. This effect was sustained in adults 6 months later. Morphology, growth and locomotor activity of the animals were unaffected, suggesting a specific action of ethanol on the HPI. In this respect the data are consistent with mammalian results, although they contrast with the higher corticosteroid stress response reported in rats after developmental ethanol exposure. The mechanisms that underlie the specific sensitivity of the HPI to ethanol require elucidation.
Introduction
In humans and other mammals, the hypothalamo-pituitary-adrenal (HPA) axis may have a crucial role in the physiological and behavioural response to addictive agents, including alcohol, and in the processes involved in withdrawal and reinstatement [1]. Acutely, alcohol treatment generally enhances corticosteroid secretion in rats [2,3] and in human subjects, probably as a consequence of increased ACTH secretion [4,5]. This in turn is a response to enhanced CRF and AVP production and release from the PVN [6][7][8]. On the other hand, binge exposure and chronic alcohol abuse blunt these actions, leading to depressed HPA activity and corticosteroid secretion [9]. Both in humans and in animal models, alcohol withdrawal may then lead to enhanced plasma (or salivary) corticosteroid [10][11][12][13], and, in rats, more prolonged corticosteroid elevation in the brain [13]. Elevated corticosteroid may in turn promote relapse [14]; indeed, some authors consider corticosteroid to be an essential component of behavioural responses to alcohol and other drugs [15][16][17]. In development, exposure to ethanol and other addictive substances has been associated with later susceptibility to behavioural deficits, including addiction. These form part of a spectrum of deleterious effects, known as Fetal Alcohol Spectrum Disorder (FASD) [18].
It is unclear to what extent the sequelae of developmental alcohol exposure are attributable to its actions on the HPA axis, though in mammals much evidence suggests the HPA is significantly perturbed. Thus both male and female rats that have been developmentally exposed to ethanol show a greater increase in plasma ACTH and corticosterone after stress, though baseline levels are usually unchanged [19][20][21][22][23][24].
The value of the zebrafish as a model for the study of the behavioural effects of ethanol has been clearly demonstrated [25][26][27][28], and the effects on behaviour of ethanol exposure during development reflect to a degree findings obtained in mammals [29].
To understand the role of the zebrafish hypothalamus-pituitary-interrenal (HPI) axis (the homologue of the mammalian HPA) in such responses, the present study set out to determine the effects of developmental exposure to ethanol on subsequent cortisol levels and responses to stress.
Animal maintenance
All animal work was carried out following approval from the Queen Mary Research Ethics Committee and under licence in accordance with the Animals (Scientific Procedures) Act 1986. Care was taken to minimize the numbers of animals used in this experiment in accordance with the ARRIVE guidelines (http://www.nc3rs.org.uk/page.asp?id=1357). Fish were bred and reared in the aquarium facility at Queen Mary University of London, licenced by the UK Home Office. Zebrafish (Danio rerio) adults from the Tuebingen wild type (TUWT) line were kept in glass breeding tanks in fish water containing sodium bicarbonate (0.9mM), calcium sulphate (0.05mM) and marine salts (Sigma, Poole, UK; 0.018g/l). Fish were maintained on a constant 14h light: 10h dark cycle at 28°C. They were fed 3 times a day with Zmsystems ZM-000 high protein food particles (Tecniplast UK, London) from 5dpf-10dpf, ZM-100 and paramecium from 11dpf-14dpf, and ZM-200 and brineshrimp from 14dpf-30dpf. At one month of age, animals were transferred into the aquaria, where they were fed Zmsystems flake food and brineshrimp.
Embryo spawning
Fertilised eggs were collected by natural spawning. At 24h post fertilisation (pf) morphological criteria such as the head-trunk angle and the optic vesicle length [30] were used to evaluate their embryonic stage. Embryos were distributed into groups of 50 in 40ml aquarium water in petri dishes then reared in an incubator at 28°C. Ethanol was added as required.
Larval size assessment
Larval size at 9dpf was determined using eLaborant, an automated image detection software package (eLaborant, Leiden, Netherlands). The method traces a virtual line around the image of each animal, estimating both the main axis length and the number of pixels contained within the traced perimeter.
Tissue dry weight was obtained by homogenizing 25 larvae in 1.5mL pre-weighed microcentrifuge tubes, using 500μl of ice-cold 3.5% v/v perchloric acid with a microcentrifuge tube pestle on ice. Samples were then evaporated in a Univapo 100H speed vac for 1 hour, using a Unijet II aspirator vacuum pump. Microcentrifuge tubes were weighed again using a precision analytical Sartorius 2006 MP scale, to obtain the dry weight of the samples.
Control animals and those treated with 20mM or 50mM ethanol solutions were photographed at high resolution (4064 x 4064) using a Nikon D800 camera. The software analysed 60 larvae in each group and provided the pixel count and central axis length per animal.
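The eLaborant software is proprietary, but the underlying pixel-count idea can be sketched with open tools. The following is an illustrative stand-in (not the algorithm actually used), assuming dark larvae photographed against a light, uniform background; the image file name is hypothetical.

```python
# Illustrative pixel-count size estimate using scikit-image: threshold the
# image, label connected components, and measure the largest dark object.
# This is a stand-in for the proprietary software, not a reimplementation.
from skimage import io, filters, measure

def larval_size(path):
    """Return (pixel area, main-axis length in pixels) of the largest object."""
    img = io.imread(path, as_gray=True)
    mask = img < filters.threshold_otsu(img)   # dark animal on light field
    regions = measure.regionprops(measure.label(mask))
    largest = max(regions, key=lambda r: r.area)
    return largest.area, largest.major_axis_length

area, axis_len = larval_size("larva_9dpf.png")   # hypothetical image file
print(f"area: {area} px, main axis: {axis_len:.1f} px")
```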
Alcohol treatment and uptake
For developmental ethanol exposure, treated larvae were exposed from 1-9dpf to 20mM and 50mM GPR ethanol (VWR, Lutterworth UK). Controls were handled similarly, but ethanol was omitted.
To verify ethanol uptake, ethanol concentrations in embryonic and larval tissue were assessed using an alcohol dehydrogenase based spectrophotometric method adapted from Reimers et al [31].
After treatment with ethanol, 25 live embryos or larvae (with intact chorions if applicable) were transferred to 1.5 ml microcentrifuge tubes (Starlab, Milton Keynes, UK) on ice. Animals were quickly rinsed twice with 500μl of ice-cold distilled water and homogenized with a microcentrifuge tube pestle in 3.5% v/v perchloric acid (500μl, Sigma).
Samples were centrifuged at 4°C for 10 minutes at 12,000g and then stored in paraffin-sealed tubes at 4°C until all samples were collected, or placed on ice and used immediately. Standard curves were constructed using six ethanol standards, ranging from 100mM to 3.125mM, yielding a non-linear quadratic polynomial function (r² = 0.99). Two replicates were prepared for each standard and sample. The initial reaction mixture was 870μl of NAD+ (Sigma; 1mg/ml in 0.5M Tris, pH 8.8) with 43.5μl of the standard or the sample (in perchloric acid) in 1.5ml microcentrifuge tubes.
To start the reaction, 86.5μl ADH (Sigma; 0.75mg/ml in water) was added to the microcentrifuge tube, the cap was closed, and the content was mixed and incubated at 37°C for 10 min, then transferred to a Starlab 1.5ml semimicro cuvette. NADH production was evaluated from its absorption at 340nm. A blank reaction, substituting 3.5% v/v perchloric acid for the ethanol solutions, was used to calibrate the spectrophotometer initially.
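As an illustration of how such a quadratic standard curve can be fitted and inverted to estimate sample concentrations, here is a minimal sketch in Python; the standard concentrations follow the text, but the absorbance readings are hypothetical placeholders, not the study's data.

```python
# Fit a quadratic standard curve (absorbance vs. ethanol concentration) and
# invert it to estimate sample concentrations. The A340 readings below are
# hypothetical placeholders.
import numpy as np

standards_mM = np.array([3.125, 6.25, 12.5, 25.0, 50.0, 100.0])
a340 = np.array([0.05, 0.10, 0.19, 0.36, 0.62, 0.95])   # hypothetical readings

c2, c1, c0 = np.polyfit(standards_mM, a340, deg=2)      # quadratic fit (r2 ~ 0.99)

def ethanol_mM(absorbance):
    """Solve c2*x**2 + c1*x + (c0 - A) = 0 and keep the physically sensible root."""
    roots = np.roots([c2, c1, c0 - absorbance])
    real = roots[np.isreal(roots)].real
    return float(real[real >= 0].min())

print(f"{ethanol_mM(0.30):.1f} mM")   # estimated ethanol for a sample with A340 = 0.30
```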
Stress tests: Air exposure and freezing
For the air exposure challenge, 9dpf larvae were transferred into the wells of 6-well plates fitted with sieve inserts. Wells were filled with 10ml of fish water. Larvae were grouped at 12 animals per well and left to habituate for 1 hour in a lit and silent environment. Sieve inserts were lifted and placed on paper towels for 1 min, air exposing the subjects inside the sieve, then immediately replaced in the wells. Control animals remained in the wells.
Animals were then quickly pipetted into 1.5ml microcentrifuge tubes, water was removed, and the tubes were immediately flash frozen in liquid nitrogen and stored at −80°C.
Adult fish were individually netted from pair-housed tanks into white opaque tanks (42.5cm length x 16cm width x 17.5cm height) containing fish water (2l). These tanks contained a smaller clear acrylic bottom-perforated tank insert (Aquatic Habitats, Apopka, USA), to allow easy and quick air exposure of the animals.
Tanks were covered with an opaque lid, and fish were left to habituate for 1h in a lit and silent environment. Preliminary experiments indicated that, in contrast to the larvae, there was no cortisol response when the adults were air exposed for 1 min, but there was a good response after just 30s (see Results). Accordingly, experimental animals were routinely air exposed by lifting the small acrylic tank for 30s before replacing it in the larger opaque tank.
After 6 min, animals were transferred into fish water at 0°C, dried on a paper towel, weighed and placed into 7ml polystyrene sample containers (Sterilin, Newport Gwent, UK), flash frozen in liquid nitrogen, and stored at −80°C.
Larval whole body cortisol extraction
A modified version of the protocol of Alderman et al. [32] was used for homogenization and extraction. Fish were thawed in microcentrifuge tubes on ice and homogenized in 200μl of ice-cold 1x phosphate buffered saline (PBS, Sigma) for 10s using a Sonoplus UW2070 ultrasonicator (Bandelin, Berlin, Germany). PBS (200μl) was used to rinse the sonicator needle and was collected into the microcentrifuge tube. Aliquots (50μl) were withdrawn for protein quantification.
Cortisol was extracted into 500μl ethyl acetate (Fisher Chemical, Loughborough, UK). Samples were vortexed for 30 seconds, centrifuged at 5000rpm for 10 min and frozen at −80°C. The organic layer was then decanted into 12ml glass screw-top vials (VWR, Lutterworth, UK). The extraction was repeated twice more.
Tubes containing the combined ethyl acetate extracts were placed in a waterbath set at 60°C and the ethyl acetate was evaporated under a stream of nitrogen. PBS (200μl) was added to the tubes, which were vortexed for 30 seconds and kept at −20°C until assessed using a human salivary cortisol kit (Salimetrics, Newmarket, UK).
Adult whole body cortisol extraction
Fish were thawed on ice in the 7ml Sterilin containers, then weighed and homogenised in 2 X PBS (w/v). The rotor blade was washed with an equal volume of PBS (i.e. 2 X BW w/v), which was then added to the homogenate.
Ethyl acetate extraction was performed as above. However, in the extracts of adult fish, an excess of lipid precipitated on the vessel wall after evaporation. To prevent any interference with the assay, this was eliminated by partitioning the dried extracts between 500μl PBS and 500μl hexane (BDH, Poole, UK), and the organic layer was discarded. The aqueous phase was stored at −20°C until required for assay. Recoveries of authentic cortisol added to tissue homogenates and extracted by this procedure were ~100%.
Cortisol assay
Samples were thawed on ice and 50μl was used for the assay, performed according to the manufacturer's specifications.
Adult whole body cortisol values were normalised against body weight, and larval and juvenile cortisol values were normalized against tissue protein content.
The Salimetrics human salivary kit was validated for zebrafish whole body cortisol use. The minimum cortisol value that gave readings significantly different from zero was 0.12ng/sample. Both inter- and intra-assay coefficients of variation obtained from replicate assays of tissue extracts were less than 6%. When authentic cortisol (3ng) was added to tissue homogenates and assay values were compared with identical samples without addition, recovery was between 101% and 104%, and the correlation between the two sets of data gave r² = 0.94 (P<0.0001). Specificity data provided by Salimetrics show negligible cross-reactivity with a range of possible contaminating steroids.
Statistics
The dependent variables in the different experiments were, respectively, tissue ethanol concentration, animal size, weight, and whole-body cortisol concentration. These were assessed in the various treatment groups, viz. handling control and ethanol exposure at different concentrations. ANOVA and Student t-tests were used to assess statistical significance. The tests were evaluated with respect to a type-1 error rate of 0.05.
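To make the analysis concrete, the following is a minimal sketch of such group comparisons in Python with SciPy; the cortisol arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the group comparisons described above, assuming whole-body
# cortisol values (ng per sample) for control, 20mM and 50mM groups; the
# arrays are invented placeholders.
import numpy as np
from scipy import stats

control = np.array([1.1, 0.9, 1.3, 1.0, 1.2])
etoh_20 = np.array([0.8, 0.7, 0.9, 0.6, 0.8])
etoh_50 = np.array([0.5, 0.6, 0.4, 0.5, 0.6])

# One-way ANOVA across the three treatment groups (alpha = 0.05).
f_stat, p_anova = stats.f_oneway(control, etoh_20, etoh_50)

# Follow-up two-sample t-test: control vs 50mM.
t_stat, p_t = stats.ttest_ind(control, etoh_50)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"t-test (control vs 50mM): t = {t_stat:.2f}, p = {p_t:.4f}")
```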
Ethanol uptake
Zebrafish embryos acutely exposed to ethanol for 24 hours showed differences in subsequent tissue ethanol concentration depending on their developmental stage (Fig 1A). At 1-24hpf, tissue concentrations approached those in the ambient water, but at 24-48hpf, tissue values were significantly lower, and at 48-72hpf ethanol concentrations stabilised at 36% of the ambient ethanol concentrations of both 20mM and 100mM. During chronic treatment from 3-9dpf, tissue ethanol content never significantly exceeded these values (Fig 1B), and tissue and ambient concentrations can therefore be assumed to be linearly related. The ambient concentrations of 20mM and 50mM used in subsequent experiments were those calculated to deliver tissue concentrations reflecting moderate alcohol use in humans. Maximally, tissue ethanol reached about 40μg/mg dry weight.
Animal size
Neither larval size nor dry body weight was affected by alcohol treatment from 1-9dpf, nor was body weight at 6 months (Fig 2). No animals died during the course of ethanol treatment, and subsequent survival was similar in treated and control animals.
Visually, the treated animals had no craniofacial deformity or any body oedema compared to controls, and appeared normal, with normal locomotor activity.
Tissue cortisol and the effects of stress
In larvae at 9dpf, there were no differences in cortisol between unstressed control and unstressed ethanol-treated animals. However, after air-exposure stress, the increase in cortisol seen in the animals previously exposed to both 20mM and 50mM ethanol was significantly lower than in controls (Fig 3). Moreover, whereas cortisol in the 20mM group was still increased by stress, cortisol in the 50mM group was not significantly different from either the unstressed controls or the unstressed 50mM group.
Six-month-old adult zebrafish were used in a pilot study to verify the effects of air stress on cortisol concentration. Whole-body cortisol concentration following 1 min air exposure was not different from that obtained from unstressed control animals (Fig 4). A 30s air exposure yielded a significant increase in tissue cortisol concentration (Fig 4), and a 30s air stress was therefore adopted as standard procedure.
There were no differences in body cortisol content between control animals and those previously treated with ethanol when unstressed. Air stress provoked a significant enhancement of body cortisol in the control animals and in the 20mM ethanol group, and these two responses were similar. However, animals that had been treated with 50mM ethanol showed a significantly attenuated response to the stress compared with the stressed controls, although the cortisol values were nevertheless greater than in the unstressed 50mM group (Fig 5).
Discussion
The impact of exposure to alcohol during mammalian development is known to be profound, and may lead to a wide range of structural, physiological and behavioural dysfunctions, culminating in humans in FASD. These are incompletely understood [18].
To further our understanding, a simpler and more tractable animal model has been sought in the zebrafish, already widely used in many aspects of normal physiology and in disease, as well as for drug screening. In this context several recent papers have examined the effects of developmental ethanol exposure on zebrafish behaviour (e.g. refs. [29,33,34]), invariably drawing attention to the parallels with mammalian species, including the human.
In mammals, the HPA axis has been implicated in the processes involved in the development of addiction and relapse, though its possible roles and the mechanisms involved remain obscure [1,5]. The aim of the present study was therefore to extend the study of developmental ethanol exposure to its effects on the stress response of the zebrafish HPI axis.
The ethanol concentrations used were selected by first assessing body uptake of ethanol from environmental water. Before 48hpf the embryo was unprotected from ethanol penetration (Fig 1), but subsequently tissue concentrations were only a fixed fraction of the ambient ethanol concentration. This fraction, 36%, was the same at ambient concentrations of both 20mM and 100mM, and it can be concluded that ambient and tissue ethanol content are linearly related. Following this assessment, the experimental concentrations of 20mM and 50mM were those calculated on this basis to deliver tissue concentrations that could reflect those occurring in moderate alcohol use in humans; maximal tissue ethanol concentrations were about 40μg/mg tissue. Fish exposed to ethanol from 1-9dpf showed no difference from controls in size or body weight (Fig 2) or locomotor activity (not shown) at the end of this period. These results are consistent with the literature: for example, in the study by Reimers et al [31], exposure to up to 150mM ethanol from 3-48hpf or 3-24hpf gave no changes in development or morphology, either at 5dpf or in the 6-month adults.
While various mechanisms may affect circulating cortisol levels at the margin, one is paramount: the integrated function of the HPA or HPI axis. It is circulating cortisol itself, and cortisol alone, that reflects the totality of HPA(I) function. It is for this reason that circulating cortisol is almost universally used as the only reliable measure of stress responses, in fish as well as in mammals (e.g. refs [35][36][37][38]). Collection of plasma is not feasible in studies of acute stress in zebrafish, and therefore whole body cortisol content was adopted as a reasonable alternative, perhaps comparable to plasma data from rodents and humans. Others have made the same choice [39]. While tissue values may not necessarily reflect rates of secretion, in mice the evidence suggests that (with some exceptions) the two are largely related [40]. Nevertheless, it is always possible that local steroid metabolism, for example by 11β-hydroxysteroid dehydrogenase type 2, may affect local tissue cortisol concentrations, and thus behaviour, or brain corticotrophin releasing hormone and the HPA [41].
From Figs 3 and 5 it is clear that developmental exposure to ethanol had no effect on basal tissue cortisol content in either larvae or adults. What is striking is its persistent effect on the response to acute stress. There is a characteristic elevation in tissue cortisol after a short air-exposure stress, confirming the validity of using tissue cortisol as an index of HPI activity. There is a characteristic dynamic in this response, and in adults it is quite short-lived (Fig 4), necessitating sampling after just 30s stress. In both larvae and adults at 6 months, early exposure to ethanol attenuated, and in some cases actually eliminated altogether, the HPI response to stress, as shown by tissue cortisol content (Figs 3 and 5). Although the treated larvae were still maintained in ethanol when these tests were carried out, the adults had been free of ethanol since 9dpf.
Fig 4. Stress-reactivity measured by whole-body cortisol concentration of 6-month-old adult zebrafish. Animals were either flash frozen or stressed, either by air exposure for a set time or by acute exposure to ethanol (1%), followed by freezing 6 minutes later. Adults showed increased cortisol concentration when air exposed for 30s, but not after 60s exposure. Means ± SE; 3 batches (on different days) of 3 samples of 10 animals were used. P<0.01 (ANOVA). doi:10.1371/journal.pone.0124488.g004
Physiologically and morphologically the treated animals are indistinguishable from normal, as noted above, and this extends to the unstressed function of the HPI. It is the response to stress alone that is impaired. The mechanisms here are currently obscure, though perhaps endocrinological, caused by defective corticotrophin or corticotrophin releasing hormone function, or by impaired steroidogenesis. At larval stages, since the animals remained exposed to ethanol, the effect could possibly be associated with its anxiolytic action. However, it is the persistence of the effect into the adults that suggests that a profound change has occurred during development in the treated animals, and this must be linked to irreversible structural or functional changes, presumably in the brain. A similar persistence of the behavioural consequences of early ethanol exposure in zebrafish leads to the same general conclusion [29,33,34]. Accordingly it is now appropriate to address the mechanisms that link behavioural effects to the HPI in these animals.
Fig 5. Stress-reactivity measured by whole-body cortisol concentration of 6-month-old adult zebrafish. Treated animals were developmentally exposed to ethanol from 1dpf-9dpf at 20mM or 50mM. At 6 months animals were either flash frozen immediately or air exposed and frozen 6 minutes later. Adults showed decreased cortisol content with increasing ethanol concentration exposure during development. Means ± SE; 9 batches of animals per group. *P<0.05 **P<0.01, t test and ANOVA.
It is remarkable that this is similar to the conclusions reached by others in the mammalian, and indeed the human, context, in which the HPA is thought to be involved in the effects of early ethanol exposure [1,5]. There are discrepancies: in mammals, developmental alcohol exposure generally leads to a hyperresponsive HPA axis [21,22,[42][43][44], though not invariably [23,24].
Nevertheless, as in mammals, it does seem likely that the zebrafish HPI is particularly susceptible to ethanol during development. Further study of its mechanism of action may yet throw light on what in mammals is so difficult to understand.
"Medicine",
"Biology"
] |
Walking, Cycling and Driving to Work in the English and Welsh 2011 Census: Trends, Socio-Economic Patterning and Relevance to Travel Behaviour in General
Objectives Increasing walking and cycling, and reducing motorised transport, are health and environmental priorities. This paper examines levels and trends in the use of different commute modes in England and Wales, both overall and with respect to small-area deprivation. It also investigates whether commute modal share can serve as a proxy for travel behaviour more generally. Methods 23.7 million adult commuters reported their usual main mode of travelling to work in the 2011 census in England and Wales; similar data were available for 1971–2001. Indices of Multiple Deprivation were used to characterise socio-economic patterning. The National Travel Survey (2002–2010) was used to examine correlations between commute modal share and modal share of total travel time. These correlations were calculated across 150 non-overlapping populations defined by region, year band and income. Results Among commuters in 2011, 67.1% used private motorised transport as their usual main commute mode (−1.8 percentage-point change since 2001); 17.8% used public transport (+1.8% change); 10.9% walked (−0.1% change); and 3.1% cycled (+0.1% change). Walking and, to a marginal extent, cycling were more common among those from deprived areas, but these gradients had flattened over the previous decade to the point of having essentially disappeared for cycling. In the National Travel Survey, commute modal share and total modal share were reasonably highly correlated for private motorised transport (r = 0.94), public transport (r = 0.96), walking (r = 0.88 excluding London) and cycling (r = 0.77). Conclusions England and Wales remain car-dependent, but the trends are slightly more encouraging. Unlike many health behaviours, it is more common for socio-economically disadvantaged groups to commute using physically active modes. This association is, however, weakening and may soon reverse for cycling. At a population level, commute modal share provides a reasonable proxy for broader travel patterns, enhancing the value of the census in characterising background trends and evaluating interventions.
Introduction
In recent years, promoting walking and cycling for transport ('active travel') has moved up multiple policy agendas, including in relation to health, transport and climate change. Active travel provides one route whereby people can integrate moderate-to-vigorous intensity physical activity into their everyday lives [1][2][3], and participating in active travel is independently associated with a wide range of health benefits [4][5][6][7]. Active travel is also more likely than recreational physical activity to displace journeys by cars [8], which in turn is expected to reduce noise, congestion, road traffic crashes, urban air pollution and the emission of greenhouse gases [9][10][11].
Despite these potential benefits, levels of walking and cycling declined in the second half of the twentieth century in Britain, while motorised transport increased [12]. The past two decades have, however, seen some hints that these trends may be at least partially reversing. The UK is one of various high-income countries in which levels of car use have flattened or slightly declined, as has the proportion of adults holding a driving licence [13][14][15]. Simultaneously, much greater policy focus has been given to promoting and investing in active travel, often particularly in relation to cycling [3,[16][17][18][19][20][21]. In London, successive Mayors have launched initiatives both to encourage cycling (e.g. a bicycle sharing system) and to discourage driving (e.g. the introduction of a 'congestion charge' for cars entering central London). Nationally, initiatives have included the publication of an Active Travel Bill in Wales and an Active Travel Strategy for the UK [19,21]; the allocation of £1 billion to local sustainable transport initiatives; and the implementation of town-wide initiatives in 18 'cycling towns' [22]. Such interventions may explain the upward trend in cycling reported in London [23] and in the original six cycling towns [22]. The first aim of this paper is to contextualise these setting-specific findings using newly-released census 2011 data. Specifically, I aim to examine national and regional levels and trends of walking, cycling and driving to work in England and Wales. A second aim is to examine changes in the distribution of these different commute modes with respect to small-area deprivation. In 2010, the Strategic Review of Health Inequalities in England called for research to monitor the social gradient of active travel [24]. This call was prompted by data in the original six 'cycling towns' indicating that higher social grade was associated with a higher probability of reporting any past-week cycling [22]. Similarly in London, higher household income is positively associated with making at least one trip by bicycle on any given day [25], while higher area affluence is positively associated with using the bicycle sharing system [26]. In a previous analysis of census data from 1971-2001, individuals from lower social classes were more likely to walk or cycle to work but this effect became less strong over time [27]. This paper examines whether this trend has continued, and therefore whether changes in commuting patterns might tend towards widening health inequalities.
The final aim of this paper is methodological. The UK census is publicly available and provides a uniquely large and representative source of information, with very high geographical resolution. It therefore provides one potentially powerful means of examining trends in travel behaviour and/or evaluating the impact of interventions, particularly those made at a sub-regional or local level. The census is, however, severely limited in including only one question on travel behaviour, namely 'usual main commute mode'. By contrast, most research studies and policy evaluations are more interested in total travel behaviour. The value of the census data therefore depends considerably on how far it can be used as a proxy for total travel behaviour, at least at the population level. This paper uses National Travel Survey data to examine this issue, as well as to contextualise the census data in other ways.
Census Data on main Commute Mode in England and Wales
The British census happens every ten years and is compulsory for all residents. In England and Wales, the estimated proportion of people covered by the census was 96% in 1991, 94% in 2001 and 94% in 2011 [28,29]. This paper takes the 2011 census as its starting point (data available from www.ons.gov.uk/ons/guide-method/census/2011/index.html) and makes comparisons with previous censuses (data available from http://casweb.mimas.ac.uk). Ethical approval was not required as all data are fully in the public domain.
For all respondents aged 16-74 with a current job, the 2001 and 2011 censuses include responses to the question "How do you usually travel to work? (Tick one box only, tick the box for the longest part, by distance, of your usual journey to work)". These data are also available for a 10% random sample of the 1971, 1981 and 1991 censuses (see File S1 for details of minor differences in the 1971 and 1981 response options). I categorised responses into five commute modes: cycling; walking; public transport; private motorised transport (car, van or motorcycle, as a driver or passenger); and other modes. I calculated the modal share of each of these modes as a proportion of all commuters, i.e. excluding people not in work or people working at or from home. All adults reporting that their home address was also their place of work were treated as non-commuters. Note that this final decision was necessary to allow comparable analyses across the censuses, but differs from some previous stand-alone analyses of census 2011 data ([30], see File S1 for details).
Small-area Deprivation, Adjusting for Geographical Remoteness
The 2010 English Index of Multiple Deprivation (IMD) [31] is a weighted composite of small-area data relating to seven deprivation domains, assigned at the level of lower super output areas (LSOA, average population around 1500). There is also a 2011 Welsh IMD [32], but differences in the constituent domains and variables mean that the two scores are not directly comparable. I therefore created hundredths of deprivation separately in England and Wales and combined these into a single variable capturing each LSOA's ranking within its country.
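A minimal sketch of this within-country ranking, assuming a table with one row per LSOA and illustrative column names (not the official field names):

```python
# Rank LSOAs within their own country, then bin into hundredths, so English
# and Welsh IMD scores are never compared directly. Column names and values
# are illustrative placeholders.
import numpy as np
import pandas as pd

lsoa = pd.DataFrame({
    "lsoa_code": ["E01000001", "E01000002", "W01000001", "W01000002"],
    "country":   ["England", "England", "Wales", "Wales"],
    "imd_score": [12.3, 45.6, 20.1, 33.3],   # higher = more deprived
})

pct = lsoa.groupby("country")["imd_score"].rank(pct=True)        # (0, 1] within country
lsoa["deprivation_hundredth"] = np.ceil(pct * 100).astype(int)   # 1..100

print(lsoa)
```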
The standard IMD score includes a small number of indicators capturing distance to services (e.g. distance to the nearest post office). This complicates interpretations of associations with commute mode, since these indicators may serve as a straightforward proxy for average commute distance. I therefore created an 'IMD-minus-distance to services' score, employing an approach that has been used elsewhere to remove particular domains from the overall score [33,34] (see File S1 for details). All substantive findings were unchanged in sensitivity analyses which used only the income deprivation domain.
To adjust for geographical remoteness in the equity analyses, I used rankings on the IMD 'distance to services' subdomain. I also used the 2004 Rural and Urban Area Classification [35] to assign settlement type (three-level categorical variable: urban area with a population >10,000; smaller towns and fringe areas; and villages, hamlets and isolated dwellings); and to assign sparseness (binary variable denoting whether the LSOA was in the bottom 5% for population density in the surrounding 30 km).
The National Travel Survey
The National Travel Survey is a continuous, population-based survey of households in Britain (annual sample size around 8100 households in recent years, household participation rate around 60% [36]). This paper uses National Travel Survey data for fully participating adults (aged 16 years or over) from 2002 to 2010 (available from http://www.esds.ac.uk). All members of participating households complete questionnaires, which cover the usual main commute mode for all working participants. These questionnaires are also used to create fifths of real household income equivalised for household composition [37]. All participants additionally complete one-week travel diaries that include the time taken and distance travelled for all stages of most trips. Motor vehicle trips off the road network are excluded (e.g. on private land), as are walking and cycling trips where the surface is unpaved or access is restricted (e.g. on private land, across open countryside or in a park that is closed at night) [38].
I first used data from trip stages in the National Travel Survey to examine what proportion of total travel time in each mode was captured directly by the question on 'main mode to work'. For example, I calculated what proportion of the total time spent cycling by adults was accounted for by commute trips made by individuals who reported cycling as their usual main mode. I then created 150 non-overlapping subpopulations within the National Travel Survey based on 10 regions (9 standard English regions plus Wales), three time periods (2002-2004, 2005-2007 and 2008-2010) and the five income fifths (10 × 3 × 5 = 150). For each subpopulation, I calculated the proportion of participants reporting each mode as their usual main method of travelling to work ('commute modal share'). I also used data from trip stages to calculate the proportion of total travel time spent in each mode ('total modal share'). This allowed me to examine how far the commuting data available in the census predicted the more general outcome of 'total travel'.
Statistical Analyses
Most analyses rely on the presentation of raw percentages (plus binomial proportion confidence intervals) or raw Pearson correlation coefficients. When analysing the National Travel Survey data, I calculated commute modal share and total modal share for each subpopulation using the household-, individual- and trip-level weights provided. These weights adjust for factors such as differential non-response rates by age, sex and region, and for the fact that participants only reported short walks (<1 mile) on the final day [36]. I then present raw correlation coefficients between commute and total modal share for these 150 subpopulations. The results were very similar if each subpopulation was weighted for its population size (mean 830 commuting adults, range 231-1765).
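The core of this comparison reduces to a Pearson correlation across subpopulations. A minimal sketch, using synthetic data in place of the survey's weighted estimates:

```python
# Correlate commute modal share with total-travel-time modal share across 150
# subpopulations (region x year band x income fifth). Synthetic data stand in
# for the weighted survey estimates.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
commute_share = rng.uniform(0.0, 0.10, size=150)               # e.g. cycling
total_share = np.clip(0.6 * commute_share + rng.normal(0, 0.005, 150), 0, None)

subpops = pd.DataFrame({"commute": commute_share, "total": total_share})
r = subpops["commute"].corr(subpops["total"])                  # Pearson by default
print(f"r = {r:.2f}, R^2 = {r * r:.2f}")
```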
For the equity analyses using census data, I fitted linear regression models with commute modal share as the outcome (e.g. proportion commuting by bicycle) and with twentieth of small-area deprivation as the main predictor variable. LSOAs were the unit of analysis, and I accounted for spatial autocorrelation by fitting two-level random intercept models of LSOAs nested within local authorities (equation in File S1). I adjusted these models for settlement type, sparseness and IMD 'distance to services' rank, entering the former two as categorical variables and the latter using linear plus quadratic terms. I used Stata 12 for all statistical analyses, and ArcGIS 10.1 to create maps.
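A sketch of this two-level random-intercept specification using Python's statsmodels (the paper itself used Stata); the variable names and synthetic data are illustrative only.

```python
# Two-level random-intercept model: LSOAs (rows) nested within local
# authorities (random intercepts), with commute modal share as the outcome.
# Synthetic data and variable names are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "local_authority": rng.integers(0, 20, n),                 # 20 fake LAs
    "affluence_20th": rng.integers(1, 21, n),                  # 1 = most deprived
    "settlement_type": rng.choice(["urban", "town", "village"], n),
    "sparse": rng.integers(0, 2, n),
    "dist_services_rank": rng.uniform(0, 1, n),
})
df["share_walk"] = 0.15 - 0.004 * df["affluence_20th"] + rng.normal(0, 0.02, n)

model = smf.mixedlm(
    "share_walk ~ affluence_20th + C(settlement_type) + C(sparse)"
    " + dist_services_rank + I(dist_services_rank**2)",
    data=df,
    groups=df["local_authority"],   # random intercept per local authority
)
print(model.fit().summary())
```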
Results
National Levels and Trends in the 2011 Census
41.1 million adults aged 16-74 took part in the 2011 English and Welsh census, of whom 14.6 million were not in employment, 2.8 million worked at or from home, and 23.7 million commuted to work. Table 1 presents the distribution of their usual main commute modes, while Figure 1 compares these to the previous four censuses. Commute modal share was dominated by private motorised transport: cars, vans or motorcycles represented the usual main mode of 67.1% of commuters (66.4% in England, 79.4% in Wales). This was followed by public transport (17.8%) and walking (10.9%), and finally by cycling (3.1%) and 'other' modes (1.1%).
Although most commuters still reported using private motorised transport, the trends suggested that this mode might be reaching saturation and perhaps starting to decline. In England, the decade between 2001 and 2011 saw a modest decrease in private motorised transport (−1.9 percentage points) and a concomitant increase in public transport (+1.9%, with this effect being driven by an increase in train commuting: see Table 1). Although these changes are relatively small in absolute terms, they acquire some additional importance when considered in light of the longer-term trends in the opposite directions (Figure 1). In this context, even the marginal changes in walking (−0.07%) and cycling (+0.09%) are somewhat encouraging when compared to the comparatively large declines in previous decades. As for Wales, it differed from England in that private motorised transport continued to increase and walking showed a more marked decrease. These changes occurred at a slower rate than in previous decades, however, suggesting that in future years these trends may stabilise or even reverse in Wales (as already appears the case for public transport).
Finally, it is worth noting that decreases in private motorised transport were largely or entirely confined to commuting as a car/van passenger or by motorcycle (Table 1). Driving oneself to work by car or van (which accounted for the vast majority of private motorised transport) showed only a very small decrease in England (−0.3%) and a notable increase in Wales (+3.6%). This suggests that changes in the proportion of commuters putting cars on the road (and therefore contributing to congestion, air pollution and road traffic crashes) have been less favourable than the changes in overall private motorised commuting presented in Figure 1.
Figure 2 shows how these overall levels and trends in cycling and walking to work varied across England and Wales, and also which areas showed the greatest increases relative to 2001. For cycling, London stood out as the only region to have experienced a marked increase (+1.7%, versus −0.6% to +0.2% in all other regions), an increase largely concentrated in inner London. This led London to overtake the East of England as the region with the highest cycle commute modal share. For walking there was less variation at the regional level, both in absolute walking levels and in the change since 2001. At a local level, the highest levels of walking and cycling (both 60%) were in the only two local authorities with a commuting population under 5000 (the Isles of Scilly and the City of London). Apart from these, the local authorities with the highest levels of cycling were the university towns of Cambridge (32.6%, 4.2 percentage point increase from 2001) and Oxford (19.1%, 2.8% increase) and the London borough of Hackney (15.4%, 8.5% increase). Bristol also stood out alongside London as a large city (population 430,000) which had substantially increased its modal share (8.2%, 3.3% increase from 2001). The local authorities with the highest levels of walking were the small, historic cities of Norwich (24.8%, 0.5% increase from 2001) and Exeter (24.1%, 3.8% increase).
Regional Levels and Trends in the 2011 Census
Equivalent maps for travel by private motorised transport and public transport are presented in Figure S1 in File S2, while File S3 tabulates all 2001 and 2011 commute modal shares for all local authorities. At a regional level, London was an outlier, with much higher levels of commuting by public transport than other regions (53%, vs. 7-14% elsewhere) and much lower levels of private motorised transport (32%, vs. 71-79% elsewhere). This difference was more pronounced in 2011 than a decade previously, reflecting the fact that London had shown the largest regional increases in public transport commuting since 2001 (+7.3%) and the largest decreases in private motorised transport (−8.8%). Yet while these changes were largest in London, the other southern regions also showed increases in public transport (+0.2 to +2.0%) and decreases in private motorised transport (−0.6 to −1.9%). By contrast the opposite was generally true in the Midlands, Wales and the North of England. At the local authority level, the highest levels of commuting by private motorised transport were all in rural areas, with the highest proportion in East Dorset (88.5%, 0.8% increase since 2001).
Equity Analyses: Socio-economic Distribution of Commute Modes
Across the entire gradient for small-area deprivation, greater affluence was associated with a higher proportion of commuters using cars, vans or motorcycles as their main mode (see Figure S2 in File S4 for raw data, and the left panel of Figure 3 for multi-level models adjusting for geographical remoteness). Simultaneously, greater affluence was progressively associated with a lower proportion of commuters walking or (except for a slight reversal in the very most affluent areas) using public transport. For example, the raw proportion of commuters using walking as their main mode was 6.7% in the most affluent tenth versus 15.4% in the most deprived tenth, translating into an adjusted difference of −7.5 percentage points (95% CI −8.0, −7.0). Cycling was fairly equal across the socio-economic gradient but was also slightly more common in deprived areas, with an adjusted difference of −0.60% (95% CI −0.77, −0.44) between the most affluent versus the most deprived tenth. Very similar patterns of commute modal share were seen across fifths of household income in the National Travel Survey in 2008-2010, the only notable exception being a more marked increase in public transport commuting among the most affluent income fifth (see Figure S2 in File S4). This broad similarity suggests that the associations observed in the census with respect to small-area deprivation may also apply with respect to individual-level measures of socio-economic position. Although greater affluence predicted lower walking, public transport use and cycling in the 2011 census, this was less true than it had been a decade earlier. As shown in the right-hand panel of Figure 3, increasing affluence progressively predicted an increase in these three modes between 2001 and 2011, and a decrease in private motorised transport. These findings were unchanged when using earlier IMD versions, all of which were highly correlated (e.g. r = 0.98 between the 2004 and 2010 versions).
The pattern of findings was very similar when analysing England and Wales separately, and these gradients were also generally apparent within local authorities. For example, the average within-local-authority association between commute mode and affluence was significantly positive for private motorised transport, significantly negative for walking and public transport, and marginally significantly negative for cycling (see Table S2 in File S4). For cycling, however, Cambridge, Oxford and Hackney were notable exceptions and showed strong positive associations between greater affluence and greater cycle commuting (see Figure 4). Similarly, Greater London was the only region of England or Wales where the average within-local-authority gradient was significantly positive, and there was also a modest positive gradient in Bristol (the largest city to have experienced a substantial cycling increase). Thus not only had the negative socioeconomic gradient for cycling flattened over time, but it was inverted in England's highest-cycling areas and in its highest-cycling region.
Setting the Census Findings in Context: Data from the National Travel Survey
Thus far, this paper has made comparisons across years, across regions and across socio-economic groups with respect to the only travel data available in the census, namely usual main mode for commuting to work. This final section uses National Travel Survey data to examine how these findings can be expected to reflect differences in travel behaviour more widely. A useful starting point is to consider what proportion of total travel time in each mode is directly captured by the census. Among adult participants in 2008-2010, 31% of all cycling time was reported during commute trips by individuals who stated that cycling was their 'usual main commute mode'. A further 10% of all cycling time was reported during commute trips made by adults who gave a different usual main mode, i.e. capturing people who used cycling as part of a multi-modal trip or who cycled only occasionally. The remaining 59% of all cycling time was reported during non-commute trips (this includes any cycling by adults not in employment).
The cycling picked up by the census question therefore corresponds to around a third of total adult cycling time. This proportion was similar for public transport (30% vs. 4% during other commute trips and 66% during non-commute trips), but was lower for car use (20% vs. 2% and 78%) and very low for walking (6% vs. 8% and 86%). Indeed, slightly less time was reported walking in commute trips where walking was the usual main mode than was reported during other commute trips (6% vs. 8%). Two-thirds of this 'other commute' walking was accounted for by multimodal public transport trips. Both here and for the analyses reported below, these findings were very similar when using travel distance instead of time.
Although capturing only a minority of total travel time, the census question served as a reasonably good proxy measure for total modal share at the population level. This is indicated in Figure 5, which presents correlation coefficients of 0.77-0.96 between the commute modal share and the proportion of total travel time spent in that mode. These correspond to R² values of 0.59-0.92, i.e. across these 150 populations commute modal share explained between 59% and 92% of the variance in total modal share. Visual inspection indicated that populations defined by region, year band or income all seemed to share broadly the same distribution (see Figures S3 and S4 in File S5). The only major exception was that high levels of public transport meant that total walking levels were higher than expected in London, hence the decision to highlight correlation coefficients excluding London in Figure 5.
Interestingly, over the observed range of commute modal shares for public transport and car use, the line of best fit of the scatter graphs in Figure 5 was reasonably similar to the line of identity (i.e. intercept zero, gradient one: see Table S3 in File S5 for equations for lines of best fit). In other words, if 20% of a population used public transport as their usual main commute mode, that population also spent approximately 20% of its total travel time in public transport. By contrast, for cycling and walking the lines of best fit differed more markedly from the line of identity. Instead a given commute modal share predicted a smaller share of total travel time for cycling and a larger share for walking.
Relative versus Absolute Measures of Travel Time
It is important to remember that the findings presented in the previous section all relate to modal share, i.e. the relative proportion of travel by different modes. A final contribution of National Travel Survey data is to caution that such relative differences do not necessarily correspond to equivalent absolute differences, because populations may differ in their absolute trip rates or travel time. This is not a major issue for the regional and temporal comparisons, because average daily travel times showed relatively little variation across regions (e.g. ranging from 55-65 min across all regions in 2008-2010, except in London, where it reached up to 69 min) or over time (e.g. ranging from 64 min in 2002-2004 to 62 min in 2008-2010). It is, however, very important for the socio-economic comparisons because total travel time showed a strong dose-response association with income. For example, total daily travel time ranged from an average of 51 min/day among adults living in the lowest income fifth in 2008-2010 to 59 min/day in the middle fifth and 77 min/day in the highest fifth.
As a result, although the proportion of active travel time was greatest in low income groups (24%, 18% and 15% among the lowest, middle and highest fifths), absolute active travel time showed much less difference (15 min/day, 12 min/day and 13 min/day among the lowest, middle and highest fifths: see Table S4 in File S5 for analyses treating walking and cycling separately). Conversely, the association between high income and percentage travel time in private motorised modes became even larger when converted into absolute travel times (50%, 67% and 70% for the proportion of travel time among the lowest, middle and highest fifths; 25 min, 39 min and 53 min for absolute daily travel time). A similar point can be made in relation to the 2011 census. Although this paper always uses 'all commuters' as a denominator, one could instead use 'total adult population' if one wanted to focus on absolute volumes of commuting travel. Given that the proportion of adults in employment was higher in more affluent areas (e.g. 70% vs. 54% in the most vs. least affluent fifth), using this alternative denominator would attenuate the socioeconomic gradient in active commute modes and strengthen the gradient in private motorised transport.
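To illustrate the arithmetic, consider hypothetical numbers chosen to mimic the pattern described (not the survey's exact estimates): a steep relative gradient can coexist with a nearly flat absolute one.

```python
# Hypothetical illustration: modal share (relative) vs. minutes/day (absolute).
# Numbers are invented to mimic the pattern described, not survey estimates.
shares_active = {"low income": 0.25, "middle income": 0.18, "high income": 0.15}
total_min_day = {"low income": 50, "middle income": 60, "high income": 80}

for group, share in shares_active.items():
    print(f"{group}: {share:.0%} of {total_min_day[group]} min/day "
          f"= {share * total_min_day[group]:.1f} min/day active travel")
# low income: 25% of 50 min/day = 12.5 min/day active travel
# middle income: 18% of 60 min/day = 10.8 min/day active travel
# high income: 15% of 80 min/day = 12.0 min/day active travel
```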
Discussion
The 2011 census indicates that private motorised transport continues to dominate commuting in England and Wales, representing 67% of usual main commute modes. This contrasts with modal shares of 18% for public transport, 11% for walking, and 3% for cycling. Somewhat more encouragingly, the long-term increase in private motorised commuting has halted across England and Wales as a whole (and even shown a small decline), while public transport, walking and cycling have risen or remained relatively stable for the first time in decades. With respect to socioeconomic position, higher affluence continues to predict a lower commute modal share of walking and, to a marginal extent, cycling. Nevertheless these negative gradients have flattened over time and the gradient for cycling is reversed in the highest cycling locations. Because affluent individuals travel more in total, these socio-economic associations with commute modal share cannot be assumed to correspond directly to associations with absolute travel times or distances. Nevertheless, commute modal share does generally appear to be a reasonably good proxy measure (at the population level) for the relative proportion of travel time spent in different modes.
Strengths and Limitations
In interpreting these findings, it is important to consider this paper's strengths and limitations. A key strength is the integration of data from complementary sources. The census represents a national sample with a uniquely high response rate, and therefore maximises power and generalisability. By contrast, alternative data sources such as the National Travel Survey or London Travel Demand Surveys have smaller sample sizes (18,000-19,000 individuals in 2009/10) and more potential for participation bias (response rates 52-60%) [36,39]. These other datasets do provide much richer travel information, however, hence my use of National Travel Survey data to contextualise and partially overcome some of the census's limitations.
The greatest limitation of the census is that it only covers travel for commuting and only covers the 'usual main mode of travel' for these commute trips. As I demonstrate using National Travel Survey data, this only captures a minority of total travel time (6-31%, depending on mode). Even when considering only commute journeys, the census question captures less than half of all commute walking time and only three-quarters of all commute cycling time. Although I demonstrate that commute modal share generally provides a reasonable proxy for total modal share, this may not be true in settings with distinctive transport characteristics (e.g. a high density of park-and-ride facilities, and therefore many multi-modal commute journeys).
Another limitation is that the British census is only conducted every ten years, and therefore cannot be used to examine the precise timing of changes in commuting patterns. In addition, the census 2011 data is currently only available at the small-area level, meaning I could only present equity analyses with respect to area deprivation. Reassuringly, this showed a broadly similar pattern to individual-level analyses of income in the National Travel Survey. Nevertheless, other socio-economic indicators may show different patterns of association [40], and multiple indicators may be needed to characterise fully the socio-economic structure of commuting [41]. It would therefore be valuable to complement the equity analyses presented here with an examination of individual socio-economic and demographic predictors of commute modal share, once samples of individual anonymised records are released. In addition, individual-level analyses could build upon this paper by examining who is changing their travel behaviour, for example whether middle-aged men show the largest increases in cycling, as suggested by previous national surveys [27,42]. Future analyses could also explore associations with geographic factors such as hilliness, climate and land use patterns; although outside the scope of this paper, these may play a key role in explaining local and regional variation [43].
Implications of Levels and Trends in Different Commute Modes
This paper adds to the evidence that, after increasing for decades, levels of car use in England and Wales may now be flattening or declining [13][14][15]. If so, this suggests that forecasts by the Department for Transport may overestimate future demand for car travel by assuming that this demand will continue to increase [14]. This finding also offers some hope for the prospect of creating a more physically active and less environmentally polluting transport system. Nevertheless, the 2011 census underlines the scale of the challenge faced in achieving this, with two-thirds of the working population currently using cars, vans or motorcycles as their usual main commute mode.
An equivalently mixed picture is offered with respect to cycling levels. On the one hand, even the very small national increase in cycling is something to celebrate when compared to previous decades of decline. London in particular stands out as a region that has achieved an impressive increase in its cycle commute modal share over the past decade. Nevertheless cycling continues to be very rare in most parts of England and Wales, and so is not realising its potential to confer substantial health and environmental benefits [44,45]. Among other things, this suggests that the examination of relative inequalities in this and other reports should not distract from the fact that cycling is (too) rare (and driving (too) common) in all socio-economic groups. Similarly, although cycling often gets more attention from policy-makers and academics, these census data serve as a reminder that walking is a far more common source of active commuting. This is particularly the case given the evidence in this and previous [46] reports indicating the large volume of walking accumulated during multimodal public transport trips.
Implications of the Socio-economic Patterning of Different Modes
This paper confirms previous research indicating that motorised transport (and associated carbon emissions) are higher among socio-economically advantaged groups [47][48][49]. This may be relevant when considering the best policy options to shift to a lowcarbon transport economy. For example, it might be that fuel or parking charges would have to rise considerably to have a substantial effect upon travel demand in these more affluent groups [47], and that effective and equitable policies would also need to include other measures (e.g. increasing the supply of attractive alternative commute options) [49].
By contrast, the modal share for active commuting was lower in more affluent areas, thereby contrasting with many other health behaviours such as smoking, poor diet or leisure-time physical activity [40,[50][51][52]. This at first seems at odds with the concern raised in the recent Strategic Review of Health Inequalities in England that differential participation in active travel might tend to widen health inequalities [24]. One reason for this difference is that the Strategic Review focussed on cycling, which showed a much flatter socio-economic gradient in the census. This therefore again highlights the importance of considering walking when looking at population-level sources of active travel, and particularly when considering more deprived areas [53]. Secondly, some previous studies have considered recreational as well as transport walking and cycling [22,26]; these two may have different correlates, with the former being more likely to show a positive association with affluence [40]. Thirdly, some previous evaluations have focussed on locations like Cambridge (e.g. [41]), which this paper shows to be atypical in having a higher cycle modal share among people from more affluent areas.
A final, key factor is that the focus in this paper has been the relative measure of 'modal share', whereas previous studies have examined absolute measures such as 'total travel time' or 'any participation' [22,25,26]. As demonstrated in this paper, these two approaches may generate qualitatively different associations with socio-economic position, and research in this field therefore needs to distinguish clearly between travel modal share and total travel volume. A focus on modal shares is likely to be more meaningful for some research questions, for example when evaluating the impacts of interventions targeting modal choice. From a broader perspective, however, it is important not to lose sight of socioeconomic differences in total travel time. Ignoring these differences may risk overstating the physical activity benefits accruing to the poor or understating the harms generated by motorised transport among the rich.
Creating an equitable transport system therefore needs to focus not only on equalising access to different modes, but also on equalising access to the potential for travel in general [49]. This is arguably particularly important given that the 2011 census suggests a continuation of the trend for the modal shares of walking and cycling to increase more rapidly among socio-economically advantaged groups [27]. Moreover, those areas that have successfully attained or maintained a high cycling modal share are also precisely the areas where cycling is most concentrated among the affluent. These two findings suggest that in the future cycling may become increasingly concentrated among more affluent groups, both in terms of modal share and, to an even greater extent, in terms of time spent cycling. To the extent that transport policies accelerate or diffuse these and other trends outlined in this paper, they may widen or narrow inequalities with respect to a range of health and social outcomes [54,55].
Methodological Implications
Besides highlighting the need to distinguish between relative and absolute measures of travel, this paper makes a methodological contribution through examining correlations between commute modal share and total modal share. It is important to stress that this paper only examines the strength of these associations at the population level. At the individual level the associations may be weaker, particularly as the individual-level determinants of modal choice for commuting often differ in important ways from those governing other journey purposes (e.g. [56,57]).
Nevertheless, except when comparing walking levels between London and elsewhere, population-level commute modal share does appear a reasonable proxy for the proportion of total travel time that adults in that population spend in that mode. This suggests that the census data can cautiously be used as an indicator for travel behaviour in general, which in turn enhances their value for evaluating transport interventions implemented at the local or regional level. This paper therefore highlights the potential power of the census not only to characterise 'the state of the nation', but also to evaluate attempts to shift that 'state' to one which is better for public health and the environment.
Supporting Information
File S1 Further details on methods.
(DOC)
File S2 Tabulation of results and additional analyses: national and regional trends. This file contains Table S1 and Figure S1. Table S1, Modal share of usual main commute modes among commuters in England and Wales (percent and 95% confidence interval). Figure S1
(XLS)
File S4 Additional analyses: equity. This file contains Table S2 and Figure S2. Table S2, Average adjusted change in commute modal share per percentile increase in affluence, in 346 local authorities of England and Wales. Figure S2, Comparison of commute modal share a) in the census by small area deprivation and b) in the National Travel Survey by equivalised household income.
(DOC)
File S5 Additional analyses: data from the National Travel Survey. This file contains Table S3, Table S4, Figure S3, and Figure S4. Table S3, Parameters of lines of best fit (univariable regression) between commute modal share (x variable) and share of total travel time (y variable). Table S4, Distribution across fifths of equivalised household income of a) the relative proportion of total travel time in different modes and b) the absolute average daily travel time in different modes: data from the National Travel Survey 2008-2010. Figure S3, Association between commute modal share and modal share of total travel time in 150 populations defined by region, year band and income fifth, distinguishing sub-populations by year. Figure S4, Association between commute modal share and modal share of total travel time in 150 populations defined by region, year band and income fifth, distinguishing sub-populations by income.
(DOC)
Multi-strategy evolutionary games: A Markov chain approach
The interaction of strategies in evolutionary games is studied analytically in a well-mixed population using a Markov chain method. By establishing a correspondence between an evolutionary game and a Markov chain, we show that results obtained from the fundamental matrix method in Markov chain dynamics are equivalent to the corresponding ones in the evolutionary game. In the conventional fundamental matrix method, quantities such as fixation probability and fixation time are calculable. Using a theorem in the fundamental matrix method, the conditional fixation time in an absorbing Markov chain is also calculable, and in an ergodic Markov chain the stationary probability distribution that describes the chain's stationary state can be obtained analytically. Finally, the rock, scissors, paper evolutionary game is evaluated as an example, and the results of the analytical method and of simulations are compared. The analytical method saves time and computational resources compared with prevalent simulation methods.
Introduction
Today, evolutionary game theory (EGT) is an active topic in many branches of science, from economics to biology [1][2][3][4][5][6][7][8][9][10]. EGT provides powerful tools for many problems in which the system's dynamics depend on the interaction between agents. The interactions between strategies are often described by evolutionary games. The performance of strategies in an evolutionary game is determined by the game's payoff matrix, which sets each strategy's spread rate: a greater payoff in the game gives a strategy a stronger tendency to spread in the population. In an infinite well-mixed population, the dynamics of the system are governed by a deterministic equation called the replicator equation [11,12], but in a finite population the dynamics are stochastic [13][14][15][16][17][18][19][20][21].
In a stochastic evolutionary game, the population is divided into several strategies and individuals interact with each other based on their strategies. The process advances in discrete time steps. In each time step, the frequency of each strategy changes by one or remains unchanged. The game's payoff matrix and the frequency of each strategy determine the probability of events at each time step. Another factor that influences the dynamics of the population is the update rule, which specifies how the payoff matrix and the frequencies distribute the probabilities of events in each time step. Depending on the update rule, the evolutionary game may stop when one of the strategies overcomes all other strategies (fixation), or continue forever. The structure of the population can also affect its dynamics; the unfolding of evolutionary games in graph-structured populations is the subject of many investigations [13,[22][23][24][25][26][27][28][29].
In stochastic evolutionary games, the fixation of a strategy is a central subject. Numerical simulation is the approach of many studies in finite populations [37][38][39][40]; there are also many investigations that evaluate the dynamics of evolutionary games analytically [18,[41][42][43][44][45][46]. In analytical work, the evolutionary process is often considered as a generalization of the Moran process [47], and this has been done for games with two strategies. The most famous analytical method for analyzing evolutionary games is the recursive equation method [48,49], in which two quantities of interest, the fixation probability and the fixation time, are obtained in terms of finite series. Evolutionary games with more than two strategies have not been studied analytically so far.
When individual mutation is taken into account, the population's dynamics are governed by an evolutionary game with no fixation strategy. After many time steps, the configuration of the population then reaches a stable state, described by a stationary probability distribution that determines how probable each configuration of the population is in the long run. In both cases (games with and without fixation strategies), as the number of strategies increases, more time and computational resources are needed to simulate the evolutionary game, so an analytical method for evaluating evolutionary games with more than two strategies is helpful. This study aims to provide an analytical method for obtaining quantities in evolutionary games that would otherwise require long simulation times and extensive computational resources.
The Markov chain method has been used successfully for analyzing evolutionary games [50][51][52], but it has never been used in an organized and systematic way. In this paper we establish the Markov chain method as a reliable method for evaluating evolutionary games. In this method, a Markov chain is introduced corresponding to each evolutionary game. Essential concepts in evolutionary games, such as fixation probability, conditional fixation time, and stationary probability distribution, are related to concepts in the Markov chain. Using the fundamental matrix method in the equivalent Markov chain, we can calculate the essential concepts of the Markov chain, which in turn yields the essential quantities of the evolutionary game. Although this method is designed for discrete-time systems, it can be used for continuous-time systems under some approximation.
The organization of the paper is as follows. In the general method section we review the Markov chain method and explain a practical theorem for obtaining conditional fixation times, which is proven in the Appendix. In the evolutionary game section we establish the correspondence between evolutionary games and Markov chains and clarify how essential concepts in evolutionary games can be obtained from the fundamental matrix method. In the results and discussion we apply our approach to an evolutionary game with three strategies: the famous rock, scissors, paper evolutionary game is used, and the results of the analytical method and of simulations are compared with each other. The conclusion is devoted to a summary and concluding remarks.
Markov chain and fundamental matrix method
In this section, we briefly review the fundamental matrix method in Markov chains and obtain a formula for calculating the conditional absorption time. In the next section, by establishing a correspondence between states of the Markov chain and states of the evolutionary game, this theorem provides handy information about the dynamics of the evolutionary process along the fixation path.
A Markov chain is described by a set of states S = {s_1, s_2, s_3, ...} and a process which starts in one of these states and moves successively between them. If the chain is currently in state s_i, it moves to state s_j with a probability denoted p_ij. The point is that the probability that the chain moves from state s_i to state s_j depends only on the initial state s_i and the final state s_j, not on which states the chain visited before s_i. The probabilities p_ij form the transition matrix P. If v_i is a vector giving the probability distribution at step i, then the probability distribution at step i + 1 is v_{i+1} = v_i P. States that, once entered, can never be left are called absorbing states, and a Markov chain containing them is called an absorbing Markov chain. If i is an absorbing state then p_ii = 1, and when the chain reaches this state the Markov chain ends. States which are not absorbing are called transient. Three valuable quantities are associated with an absorbing Markov chain. The first is the probability b_ij that the chain, starting from transient state i, is absorbed in absorbing state j. The second is the absorption time t_i, the expected number of steps before the chain is absorbed in one of the absorbing states, given that it starts from state i. The last is the conditional absorption time τ_ij, the expected number of steps before the chain is absorbed in the absorbing state s_j, given that it starts in transient state i. It is necessary to emphasize that absorption time differs from conditional absorption time: the absorption time is a weighted average of the conditional absorption times over the different absorbing states. A helpful method for calculating absorption probabilities and absorption times is the fundamental matrix method. In this method, the transition matrix is first written in canonical form,

P = ( Q  R )
    ( 0  I ),

where Q holds transitions among the T transient states and R holds transitions from transient to absorbing states; in other words, in canonical form the states are relabelled so that the absorbing states come last. The so-called fundamental matrix is defined as N = (I − Q)^−1 and is used to obtain absorption probabilities and absorption times. Let t_i denote the (average) absorption time of the Markov chain starting from state i, and r_{a_1 i}, r_{a_2 i}, ... the absorption probabilities corresponding to the absorbing states a_1, a_2, ..., starting from state i. Following the approach of Ref. [53], these quantities can be written in matrix notation and obtained from the fundamental matrix as

B = N R,    t = N c,

where B is the matrix of absorption probabilities, t is the vector of absorption times, and c = (1, 1, ..., 1)^T. If there is no absorbing state, the Markov chain is called ergodic. In an ergodic Markov chain it is possible to go from every state to every other state in a finite number of steps. If P is the transition matrix of an ergodic Markov chain, then as n → ∞, P^n approaches a limiting matrix W whose rows are all equal to the same vector w, called the fixed row vector of P. This means that after a long run the Markov chain reaches an equilibrium, in which the probability that the chain is in state j is given by w_j. Obviously, wP = w, so w is the left null vector of the matrix P − I.
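As a concrete illustration, the short NumPy sketch below (our own example, not code from the paper) computes the fundamental matrix, absorption probabilities and absorption times for a small absorbing chain: a symmetric random walk on {0, 1, 2, 3} with absorbing endpoints.

```python
import numpy as np

# Symmetric random walk on {0,1,2,3}; states 0 and 3 are absorbing.
# Canonical ordering: transient states (1,2) first, then absorbing (0,3).
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])          # transient -> transient
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])          # transient -> absorbing

N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix N = (I - Q)^-1
B = N @ R                           # absorption probabilities, B = N R
t = N @ np.ones(2)                  # absorption times, t = N c

print(B)   # [[2/3, 1/3], [1/3, 2/3]]
print(t)   # [2., 2.]
```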
In other words, the fixed row vector of P is the left eigenvector of P with eigenvalue one. The fundamental matrix method does not, however, provide a recipe for calculating the conditional fixation time. We therefore describe a theorem which, by adding some details to the fundamental matrix method, allows the conditional fixation time to be calculated for any absorbing state.
Theorem: Let τ_ia be the conditional fixation time for absorption in absorbing state a, given that the Markov chain starts from transient state i. In matrix notation,

τ_ia = Σ_{j=1}^{T} n′_ij,    with    n′_ij = n_ij r_ja / r_ia,

where n_ij are the entries of the fundamental matrix N, r_ja is the probability of absorption in a starting from state j, and T is the number of transient states. The proof of this theorem is presented in the Appendix; a proof with different notation is also given in Ref. [54].
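The theorem translates directly into a few lines of code. The sketch below (again our own illustration) continues the random-walk example above and checks that the absorption time is recovered as the weighted average of the conditional absorption times.

```python
import numpy as np

Q = np.array([[0.0, 0.5], [0.5, 0.0]])
R = np.array([[0.5, 0.0], [0.0, 0.5]])
N = np.linalg.inv(np.eye(2) - Q)
B = N @ R                              # r_{ja}: absorption probabilities

def conditional_time(i, a):
    # tau_ia = (1 / r_ia) * sum_j n_ij * r_ja  (theorem above)
    return sum(N[i, j] * B[j, a] for j in range(N.shape[0])) / B[i, a]

tau = np.array([[conditional_time(i, a) for a in range(2)] for i in range(2)])
print(tau)                             # [[5/3, 8/3], [8/3, 5/3]]
print((B * tau).sum(axis=1))           # weighted average = [2., 2.] = N c
```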
Evolutionary games and their corresponding Markov chains
This section develops a method based on correspondence between Markov chain dynamics and evolutionary game dynamics. This correspondence provides a sound mathematical device for analyzing evolutionary games.
Consider a population of size N in which n strategies interact with each other according to a payoff matrix A = (a_ij), where a_ij is the payoff of strategy i against strategy j.
In each time step, the expected payoff of each strategy is obtained from the frequencies of the strategies and the payoff matrix as

π(i) = Σ_j a_ij f_j,

where π(i) is the expected payoff of strategy i and f_j is the frequency of strategy j. The expected payoff is generally interpreted as the fitness of a strategy in evolutionary game theory; in other words, strategies spread at rates proportional to their expected payoffs. There are many ways to obtain the fitness of a strategy from its expected payoff, such as an exponential payoff-to-fitness mapping. Depending on the update rule of the dynamics, the evolutionary process may lead to the fixation of a strategy, meaning that one strategy overcomes the others and occupies the whole population forever. In evolutionary games with fixation strategies, three concepts are noteworthy: the fixation probability, the probability that a strategy fixes in the population; the fixation time, the average number of time steps until the evolutionary process is fixed in one of its fixation strategies; and the conditional fixation time, the average number of time steps until the evolutionary game is fixed in a specific strategy. The update rule may also be such that no strategy can overcome the others forever. In this situation, after a long run of many time steps, the population reaches a stable condition, meaning that the probability of the evolutionary process being in each state approaches a stationary value.
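For instance, with an illustrative 3 × 3 payoff matrix (values chosen arbitrarily here, not taken from the paper), the expected payoffs follow from a single matrix-vector product:

```python
import numpy as np

A = np.array([[0.0, -1.0, 2.0],      # illustrative payoff matrix
              [2.0, 0.0, -1.0],
              [-1.0, 2.0, 0.0]])
f = np.array([0.5, 0.3, 0.2])        # strategy frequencies, sum to 1

pi = A @ f                           # pi[i] = sum_j a_ij * f_j
print(pi)
```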
In the evolutionary process, the state of the population is described by the frequency of each strategy. In each time step, one strategy is chosen for reproduction, and its offspring replaces an individual of another strategy. In other words, in each step the frequency of one strategy increases by one, the frequency of another decreases by one, and the state of the evolutionary game changes. The update rule of the evolutionary game determines which strategy has a higher probability of reproducing and which has a higher probability of being replaced. It is possible that the strategy chosen for reproduction and the strategy that is replaced are the same; in this situation the state of the evolutionary game remains unchanged. Corresponding to each evolutionary game with l fixation strategies, there is a Markov chain with l absorbing states; likewise, corresponding to each evolutionary game with no fixation strategy, there is an ergodic Markov chain. The states of the evolutionary game dynamics can be taken as the Markov chain states, and the transition matrix of the corresponding Markov chain is obtained from the update rule of the evolutionary game.
The fixation probability, fixation time, and conditional fixation time in the evolutionary game correspond to the absorption probability, absorption time, and conditional absorption time in the Markov chain. Since the fundamental matrix method is available in Markov chain theory, this duality between Markov chain dynamics and evolutionary game dynamics is very helpful for analyzing evolutionary games. In games with fixation strategies, the theorem of the previous section yields the conditional fixation time for each strategy, and in evolutionary games with no fixation strategy, the stationary probability distribution of strategies is obtained by calculating the left null vector of the matrix P − I. In the next section, we use this correspondence to analyze the rock, scissors, paper game.
In the real world, the coexistence of species can arise when three competitors interact with each other as in the rock-paper-scissors game. According to the predictions of some models, the coexistence of all three competitors is possible if the interaction between them is local. In reference [68], the coexistence of three populations of Escherichia coli was studied empirically: coexistence is preserved when the interaction between species is localized, whereas when dispersal and interaction are nonlocal, diversity is lost and one species occupies the whole population. Another example of the rock-paper-scissors evolutionary game in biology is the changing frequency of adult side-blotched lizards. In reference [69], the authors studied the frequencies of three side-blotched lizard morphs from 1990-95. According to their observations, the fitness of each morph depends on the other morphs. They propose an evolutionarily stable strategy model which predicts the frequency of each morph; estimating the parameters of the payoff matrix of the RSP game from field data, the model predicted the morphs' oscillating frequencies.
Without loss of generality, the payoff matrix of the RSP game can be parameterized by constants a_i, b_i > 0. At first, we set the update rule so that the evolutionary process ends when the whole population is occupied by one strategy; the corresponding Markov chain is therefore an absorbing Markov chain. By changing the update rule of the evolutionary game, we then introduce the possibility of mutation, which means that when a strategy goes extinct, there is a probability that other strategies mutate into the extinct strategy, so that it reappears in the population. In this situation the evolutionary process never ends, but after a long run it reaches a stable condition, and the corresponding Markov chain is an ergodic Markov chain.
RSP game with absorbing states
Consider a population of size N in which each member can be one of three types: rock, scissors, or paper. We denote the strategies rock, scissors, and paper by 1, 2, and 3, respectively. The evolutionary process runs under a birth-death update rule: at each time step, one member of the population is chosen for reproduction, and the chosen member randomly selects another member of the population to be replaced by its offspring. The probabilities of being selected for reproduction and of being replaced are proportional to the strategies' frequencies. The expected payoff of each strategy enters the update rule via the Fermi distribution function [70]. The probability that, in each time step, strategy k replaces strategy l is a function of the frequencies f_k and f_l of strategies k and l, respectively, and of the Fermi function F.
The Fermi function is defined as F(x) = 1/(1 + e^{−βx}), where β > 0 is a constant. The expected payoffs π_k of the strategies can be calculated for k = 1, 2, 3 from the payoff matrix and the strategy frequencies. According to Eq (7), when a strategy goes extinct there is no possibility of it appearing in the population again, and sooner or later the whole population is occupied by one of the strategies; this means the corresponding Markov chain is an absorbing Markov chain. According to Eq (6), the number of states in this Markov chain is (N + 1)(N + 2)/2.
Each state of the Markov chain corresponds to a configuration of strategies in the evolutionary game, and the state space forms a triangular simplex. When the Markov chain is on one of the triangle's sides, it is impossible to return to the interior of the triangle, because under this specific update rule an extinct strategy never comes back. When the Markov chain is on a side of the triangle, it is eventually absorbed at one of that side's two vertices. We are interested in obtaining the fixation probability and conditional fixation time for every state in the simplex. After constructing the transition matrix using Eq (7) and calculating the fundamental matrix, one can obtain the fixation probability of every state of the simplex for the three absorbing states.
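To make the construction concrete, the sketch below builds an absorbing transition matrix for a small RSP population. The payoff matrix, the assumed Fermi-update form T ∝ f_k f_l F(π_k − π_l), and all parameter values are our own illustrative choices, not the paper's exact specification.

```python
import numpy as np
from itertools import product

N, beta = 10, 0.1
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])  # illustrative RSP payoffs

# States: (n1, n2) with n3 = N - n1 - n2; (N+1)(N+2)/2 states in total.
states = [(n1, n2) for n1, n2 in product(range(N + 1), repeat=2) if n1 + n2 <= N]
index = {s: i for i, s in enumerate(states)}

def fermi(x):
    return 1.0 / (1.0 + np.exp(-beta * x))

P = np.zeros((len(states), len(states)))
for (n1, n2), i in index.items():
    n = np.array([n1, n2, N - n1 - n2])
    pi = A @ (n / N)                       # expected payoffs
    for k, l in product(range(3), repeat=2):
        if k == l or n[k] == 0 or n[l] == 0:
            continue
        # assumed rate: strategy k reproduces and replaces an l-individual
        e = np.zeros(3, dtype=int); e[k], e[l] = 1, -1
        j = index[tuple((n + e)[:2])]
        P[i, j] = (n[k] / N) * (n[l] / N) * fermi(pi[k] - pi[l])
    P[i, i] = 1.0 - P[i].sum()             # staying probability

print(P.shape)    # (66, 66) for N = 10, i.e. (N+1)(N+2)/2 states
```

From this matrix, reordering into canonical form and applying the fundamental matrix machinery above yields the fixation probabilities and (conditional) fixation times of every simplex state.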
After finding the fixation probabilities of the states, the theorem of the general method section yields the conditional fixation time for any state of the simplex.
To observe the footprint of the RSP game, we set the elements of the payoff matrix for both the neutral case and the strong selection case. In the neutral case, the elements of the payoff matrix are a_1 = a_2 = a_3 = 1, b_1 = b_2 = b_3 = 2. In the strong selection case, we set the elements of the payoff matrix extremely in favor of the paper strategy and to the detriment of the rock strategy: a_1 = a_3 = 1, a_2 = 300, b_1 = b_3 = 0, b_2 = 300. Figs 2-4 show the fixation probabilities of the paper, scissors, and rock strategies, respectively, when the process begins in each state of the simplex. In the neutral selection case, as the distance between the starting state and an absorbing state decreases, the probability of absorption there increases. After changing the payoff matrix in favor of the paper strategy, the probability of absorption into the paper strategy increases for all states inside the simplex; in this case, even states far from the fixation state R = 0, S = 0, P = N have a high probability of being absorbed into it.
There are also states that have a high probability of absorption into the scissors strategy in the neutral case but a high probability of absorption into the paper strategy in the strong selection case, because the payoff matrix was changed in favor of the paper strategy. Similarly, some states have a high probability of absorption into the rock strategy in the neutral case but a high probability of absorption into the scissors strategy in the strong selection case, because we changed the payoff matrix to the detriment of the rock strategy; in the strong selection case there are fewer states with a high probability of absorption into the rock strategy. Changing the payoff matrix affects the conditional fixation time too. Figs 5-7 show the conditional fixation times of the paper, scissors, and rock strategies, respectively, when the process begins in each state of the simplex.
Comparing the conditional fixation times in the neutral and strong selection cases shows that absorption into the paper strategy happens in a shorter time in the strong selection case.
As shown in Fig 6, states that are close to the fixation state R = N, S = 0, P = 0 are absorbed into the scissors strategy in a shorter time in the strong selection case. Also, the conditional fixation time for absorption into the rock strategy increases in the strong selection case for all simplex states; the reason, again, is the change of the payoff matrix to the detriment of the rock strategy. In all figures, the results of the analytical approach and of simulations are compared with each other. In most of them, the simulation results coincide with the analytical results; still, in Figs 6 and 7, in the strong selection part, the agreement is less obvious. Since in some states the probability of absorption into the rock strategy is very low in the strong selection case, a great many realizations of the evolutionary game are needed to obtain a sufficient number of realizations that end in the rock strategy; that is, the simulation must be repeated many more times to obtain an accurate result. The same is true for the conditional fixation time of the scissors strategy. The difficulty of obtaining simulation results under some conditions underlines the value of an analytical method.
RSP game without absorbing states
One may set the update rule so that none of the strategies ever fixes. In this situation, the corresponding Markov chain is an ergodic Markov chain. To compare our final result with the numerical results obtained in previous works, we use the update rule of Ref. [39]. According to this update rule, the probability that, in each time step, one member of the population switches from strategy l to strategy k is proportional to T_{l→k} = ε + W(π_k − π_l), where ε is a positive value which guarantees mutation in the process and W(x) is zero when x is negative and equals the identity function when x is positive or zero. The elements of the transition matrix can then be calculated from these rates.
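The stationary distribution of the resulting ergodic chain is the left null vector of P − I (equivalently, the left eigenvector of P with eigenvalue one). A minimal sketch, with an arbitrary small ergodic transition matrix standing in for the full RSP chain:

```python
import numpy as np

# Any ergodic transition matrix would do; this 3-state example is illustrative.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Solve w(P - I) = 0 with the normalization sum(w) = 1.
M = np.vstack([(P - np.eye(3)).T, np.ones(3)])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
w, *_ = np.linalg.lstsq(M, rhs, rcond=None)

print(w)                 # stationary probability distribution
print(w @ P - w)         # ~0: w is invariant under P
```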
Conclusion
This paper introduced the Markov chain method as an accurate analytical method for analyzing evolutionary game dynamics. Previously, the Markov chain method had been used for studying two-strategy evolutionary games or the Moran process, but using the theorem explained in the general method section, it can be applied to evolutionary games with any number of strategies. The method is flexible with respect to the update rule of the evolutionary game. For update rules that admit fixation strategies, the fixation probability of each strategy and the fixation time are calculable by the standard Markov chain method, and the theorem yields the conditional fixation time for each strategy. As an example, RSP games were evaluated with two update rules. Under the first update rule, each of the three strategies can fix; using the fundamental matrix method, the fixation probabilities and conditional fixation times obtained were consistent with simulation results. Under the second update rule, mutation is possible in the evolutionary game and there is no fixed strategy; taking the left null vector of the matrix P − I yields the limiting probability distribution, in agreement with simulation results. The method can also be applied to evolutionary games with more than three strategies. There is wide scope for applying the Markov chain method beyond the RSP game: in references [50,51], we used the Markov chain method to evaluate the Moran process, and since in many situations a social dilemma is represented by the Prisoner's Dilemma, Chicken, or Stag Hunt games [71,72], applying this method to the archetypal 2 × 2 symmetric games should lead to significant results.
Appendix
In this appendix, we prove the theorem of the general method section on the conditional absorption time in an absorbing Markov chain. It has already been proven [53] that in an absorbing Markov chain the fundamental matrix N = (I − Q)^−1 exists and can be written as the infinite series N = Σ_{k=0}^{∞} Q^k. Let s_i and s_j be two transient states, and assume the chain starts in state s_i. Let X^(k) be a random variable which equals 1 if the chain is in state s_j after k steps and 0 otherwise, and let A denote the event that the chain is absorbed in the absorbing state s_a. To obtain the conditional absorbing time τ_ia we need to calculate P(X^(k) = 1 | A). To this end, we use the relation for conditional probability

P(X^(k) = 1 | A) = P(A | X^(k) = 1) P(X^(k) = 1) / P(A).

Clearly P(A) = r_i and P(X^(k) = 1) = q^(k)_ij, where q^(k)_ij is the (i, j) entry of Q^k and r_i is the probability of absorption in s_a starting from s_i. Using P(A | X^(k) = 1) = r_j, we arrive at

q′^(k)_ij := P(X^(k) = 1 | A) = q^(k)_ij r_j / r_i.

The expected number of times the chain is in state s_j in the first m steps, given that it is absorbed in state s_a and starts in state s_i, is

E(X^(0) + X^(1) + ... + X^(m)) = q^(0)_ij r_j / r_i + q^(1)_ij r_j / r_i + ... + q^(m)_ij r_j / r_i.   (15)

When m goes to infinity, this sum converges to n′_ij = Σ_{k=0}^{∞} q′^(k)_ij = n_ij r_j / r_i. Using these conditional probabilities, we can calculate the conditional absorbing time as τ_ia = Σ_j n′_ij. In this way, one can obtain the average conditional absorption time for processes that are eventually absorbed in any given absorbing state.
"Mathematics"
] |
Theory and Implementation of Coupled Port-Hamiltonian Continuum and Lumped Parameter Models
A continuous Galerkin finite element method that allows mixed boundary conditions without the need for Lagrange multipliers or user-defined parameters is developed. A mixed coupling of Lagrange and Raviart-Thomas basis functions is used. The method is proven to have a Hamiltonian-conserving spatial discretisation and a symplectic time discretisation. The energy residual is therefore guaranteed to be bounded for general problems and exactly conserved for linear problems. The linear 2D wave equation is discretised and modelled by making use of a port-Hamiltonian framework. This model is verified against an analytic solution and shown to have standard order of convergence for the temporal and spatial discretisation. The error is shown to grow linearly over time for this symplectic method, which agrees with theoretical results. A modal analysis is performed which verifies that the eigenvalues of the model accurately converge to the exact eigenvalues as the mesh is refined. The port-Hamiltonian framework allows boundary coupling with bond-graph or, more generally, lumped parameter models, therefore unifying the two fields of lumped parameter modelling and continuum modelling of Hamiltonian systems. The wave domain discretisation is shown to be equivalent to a coupling of canonical port-Hamiltonian forms. This feature allows the model to have mixed boundary conditions as well as mixed causality interconnections with other port-Hamiltonian models. A model of the 2D wave equation is coupled, in a monolithic manner, with a lumped parameter model of an electromechanical linear actuator. The combined model is also verified to conserve energy exactly.
Introduction
As computational power increases and the desire for ever more complex models grows, the need to have an energy-conserving mathematical framework for multiphysics, multidomain problems is becoming apparent. Many fields require complex couplings between continuum and lumped parameter models (LPMs), including physiology, aerospace, vehicle dynamics, robotics, and many more. The coupling of these models is typically done in an iterative manner, which has multiple disadvantages. Two major downsides are the failure of iterative couplings to conserve energy [35] and the difficulty in finding a stable combination of timesteps for the coupled models [45]. Another equally large disadvantage of the iterative approach is the difficulty of implementing control algorithms without a monolithic state-space matrix. A fully coupled, monolithic approach that conserves the structural properties of the lumped parameter and continuum models is desirable.
In this paper we discuss the energy conservation of a continuum system described by a partial differential equation (PDE) coupled with a LPM. Specifically, a model is derived for the linear 2D wave equation with an electromechanical (EM) linear actuator driving one of the boundaries. This example also shows that the port-Hamiltonian (PH) formalism can be used to interconnect various types of models while conserving energy flow through ports/boundaries. PH is a proven method for the modelling and control of complex multiphysics systems. Over the last 20 years, a significant amount of work has been done on infinite-dimensional PH methods for the modelling of continuum systems [17,42]. Some PDEs that have been modelled using the PH framework are transmission line equations, shallow water equations and Timoshenko beam equations [14]. More recently, both 2D and 3D models have been implemented using the PH framework [38,41,44].
In Sect. 2 the spatial discretisation is proven to conserve the Hamiltonian in the same manner as the exact equations. Section 3 formulates the discretisation of the temporal domain with well-known symplectic integrators: the symplectic Euler (SE) method, the symplectic (implicit) midpoint (SM) method and the Störmer-Verlet (SV) (leapfrog) method. A symplectic method conserves volume in phase space, which results in bounded conservation of the Hamiltonian [20]. The combined spatial-temporal discretisation, therefore, conserves the Hamiltonian structure of the governing equations. A similar class of methods that conserves the Hamiltonian structure is the class of multi-symplectic methods. Multiple groups have applied multi-symplectic methods to both the linear and non-linear wave equations. Reich used Runge-Kutta finite difference schemes in both space and time [34], McLachlin compared multiple methods, including spectral methods [28], and Brugnano used a Hamilton Boundary Value Method [5]. McDonald shows that multi-symplecticity preserves the travelling waves of hyperbolic equations [27]. The Partitioned Finite Element Method (PFEM) in the paper by Cardoso-Ribeiro et al. [11] combines a finite element spatial discretisation with an SV time integration scheme in a way that conserves energy but requires Lagrange multipliers to implement mixed boundary conditions [21]. Brugnoli et al. successfully apply this approach to Mindlin and Kirchhoff plate models [7,8]. Brugnoli et al. [9] have also introduced another method for applying mixed boundary conditions to PFEM. Their method requires discretisation of the spatial domain into separate sections, each section having one type of boundary condition. The Stokes-Dirac structure [43] and interconnection property of PH methods is then used to combine the multiple sections, creating a full system with mixed boundary conditions. Similar to our method, Kotyczka uses a finite element discretisation and symplectic time integration in a way that conserves the Hamiltonian structure and allows for mixed boundary conditions [22]; however, user-defined parameters in the method must be tuned for accurate results. The method introduced in this paper combines desirable attributes of Cardoso-Ribeiro's, Brugnoli's, and Kotyczka's methods and provides a Hamiltonian-conserving, symplectic method that allows easily implemented mixed boundary conditions and port-based boundary coupling, and does not require tuning of user-defined parameters.
To ensure conservation of energy flow through the boundaries, a weak boundary condition implementation is used for the Dirichlet conditions, similar to the way Neumann conditions are typically implemented in finite element methods. Weak boundary conditions are implemented in the variational form and provide many benefits for a finite element formulation. One of these benefits is that it simplifies the implementation by not having to directly prescribe the degrees of freedom (DOFs) at the boundary. This can have benefits in applying multiple types of boundary conditions, including no-slip conditions for the Navier-Stokes equations [3]. Also, no manipulation of the solution matrix is required to prescribe the DOFs. The ability to have mixed boundary conditions is also extended to allow mixed causality interconnections with other PH models. By showing that our spatial discretisation is equivalent to a coupling of canonical PH models with either Neumann or Dirichlet boundaries and by following the interconnection methods of Brugnoli [6], we calculate the causal boundary connections with a LPM. The ability to use the canonical forms to calculate the power conserving interconnection allows mixed causality boundary interconnection between other PH models.
Continuous Galerkin and hybridizable discontinuous Galerkin methods couple well with PH methods due to the ease of ensuring structural properties of the models and the simplicity of coupling different models at the boundary. Brezzi and Fortin give a good overview of Galerkin methods [4]. Cockburn developed a unifying framework for Galerkin methods [12] that multiple authors have extended. McLachlan extends Cockburn's discontinuous Galerkin (DG) methods into a formulation that allows proof of multi-symplecticity for elliptic equations [29]. Sanchez uses a hybridizable discontinuous Galerkin approach to obtain a wave equation discretisation similar to our method, that is proven to be Hamiltonian-conserving and symplectic in time [36]. We prove the same qualities for our method, a continuous Galerkin approach that has the important added novelty of being able to be coupled with arbitrary PH LPMs. The conservativity and modular approach of this method is thus ideal for a wide range of real-world problems.
Energy conservation in the Galerkin method is extremely important for real-world applications that require long-time simulations, such as in geophysical fluid dynamics. Bauer uses a Poisson bracket approach to prove conservation of energy for a Galerkin discretisation of the rotating shallow water equations [2]. Extending on Bauer's work, Eldred uses the Galerkin method coupled with a Poisson time integrator to conserve energy when modelling the thermal shallow water equations [15]. Both of these models take advantage of Hamiltonian-conservation to give good long-time prediction for geophysical fluid dynamics applications.
Modelling of wave propagation also has applications in multiple biological fields. In this paper, the wave equation models homogeneous, linear-elastic media. For extensions with heterogeneous materials, typical of most biological materials, see the paper by Serhani [38]. Elastography [31] is an interesting application that requires a type of inverse modelling to identify the elasticity of heart tissue. A coupled LPM-continuum model could improve the current elastography methods. There has also been work done on non-linear models for wave propagation through biological tissue [13].
The model in this paper is implemented with the software FEniCS [1], which is a tool for automated scientific computing that focuses on solving PDEs. FEniCS allows for a very efficient implementation of finite element methods specified in a weak form. Another strong attribute of FEniCS is its automatic differentiation, which is valuable for inverse problems and for control.
The section outline of the paper is as follows. Section 2 details the PH form of the wave equation, proves that the discretisation conserves energy, and proves that the discretisation retains the Stokes-Dirac structure. The FEniCS implementation and the results showing energy conservation are shown in Sect. 3. Section 4 validates the wave equation against an analytical solution by showing spatial, temporal, and eigenvalue convergence. An EM model is introduced in Sect. 5. In Sect. 6 the wave equation is coupled with the EM model in a monolithic approach that conserves the canonical PH structure. Finally, in Sect. 7, the results of the combined model are shown and the energy conservation of the model is discussed. Appendices A and C detail the Python code for Sects. 3 and 6, respectively.
The Wave Equation
This section details the weak-form PH discretisation of the wave equation with constant propagation speed, the conservation of energy proof, and the proof that the discretisation ensures a Stokes-Dirac structure. The 2D wave equation in Cartesian coordinates is used throughout this paper; however, all results in this section extend naturally to the 3D wave equation. The basic linear wave equation is

∂²w/∂t² = c² Δw,   (1)

where w is the wave amplitude, x denotes the spatial coordinates, and t is the time. To model simplified wave propagation in an elastic membrane we define c² = k_w/ρ_w as the wave speed squared, a constant determined by the material density (ρ_w) and stiffness (k_w). To transform Equation (1) into PH form, the state variables p̃, the momentum, and q̃, the strain, are chosen as

p̃ = ρ_w ∂w̃/∂t,   q̃ = ∇w̃.

This transforms the second-order equation into a system of (n + 1) first-order equations, where n is the number of spatial dimensions. Note that the tilde overscript is used to denote exact variables, to distinguish them from the approximate functions used in subsequent sections. Using the PH notation of flow and effort variables, the time derivatives of the state variables define the flows f̃_p and f̃_q,

f̃_p = −∂p̃/∂t,   f̃_q = −∂q̃/∂t.

The Hamiltonian functional H̃ and the Hamiltonian density are respectively given by

H̃ = ∫_Ω ( p̃²/(2ρ_w) + (k_w/2)|q̃|² ) dΩ,

where Ω is an open, bounded spatial domain with a Lipschitz-continuous boundary ∂Ω. The effort variables ẽ_p, the velocity, and ẽ_q, the stress, are defined as the variational derivatives of the Hamiltonian density,

ẽ_p = p̃/ρ_w,   ẽ_q = k_w q̃.

For a functional that depends only on its states and not on their spatial derivatives, the variational derivatives are equal to the partial derivatives of the integrand. Transforming Equations (1) and (3) to (6) into a PH structure gives

( f̃_p )       ( 0     div ) ( ẽ_p )
( f̃_q )  = −  ( grad   0  ) ( ẽ_q ),   (7)

where the div and grad operators make up the formally skew-adjoint J operator. For a proof of the skew-adjointness of J, see the work by Trenchant et al. [40].
Theorem 2.0.1 Equation (7) is energy-conserving, i.e., the rate of change of the Hamiltonian is equal to the energy flow through the domain boundary.
Proof The rate of change of the Hamiltonian is

Ḣ = ∫_Ω ( ẽ_p ∂p̃/∂t + ẽ_q · ∂q̃/∂t ) dΩ.   (8)

Equation (8) satisfies the well-known bond-graph and PH condition that the product of the effort and flow variables equals the power [42]. Substituting Equation (7) into Equation (8) and using integration by parts gives

Ḣ = ∫_∂Ω ẽ_p (ẽ_q · n) ds.

Therefore, the rate of change of the Hamiltonian depends only on the boundary terms, and thus the Hamiltonian is conserved within the internal domain.
The system inputs and outputs can be chosen as

ũ = ẽ_q · n|_∂Ω,   ỹ = ẽ_p|_∂Ω,   (9)

respectively, or for the opposite causality the inputs and outputs are

ũ = ẽ_p|_∂Ω,   ỹ = ẽ_q · n|_∂Ω,   (10)

respectively.
Weak Form
In this section, the discretised weak form of the wave equation is derived. First, we define the L² inner products over the domain, Ω, and the boundary, ∂Ω, as

⟨f | g⟩_Ω = ∫_Ω f g dΩ,   ⟨f | g⟩_∂Ω = ∫_∂Ω f g ds.   (11)

Using the same function spaces as in Cardoso-Ribeiro's work [11], approximate flow and effort functions are introduced,

f_p, e_p ∈ H¹(Ω),   f_q, e_q ∈ H_div(Ω).   (12)

In the succeeding sections, we show that this choice of function spaces allows us to combine important features of Cardoso-Ribeiro's and Kotyczka's [22] methods: the ease of implementation of Cardoso-Ribeiro's method is combined with the ability of Kotyczka's method to implement mixed boundary conditions without Lagrange multipliers. Substituting these approximate functions for the exact flows and efforts in Equation (7) and taking the inner product with the test functions v_p and v_q gives Equations (13a) and (13b). The right-hand sides of Equations (13a) and (13b) are then integrated by parts to give Equations (14a) and (14b). These equations are now in a form where the Galerkin method can be applied. To do this, the approximate flow and effort functions in Equation (12) are defined from the discrete flow vectors (f̂_p ∈ R^Np, f̂_q ∈ R^Nq) and effort vectors (ê_p ∈ R^Np, ê_q ∈ R^Nq) and the vectors of globally defined basis functions (φ_p, φ_q) as

f_p = φ_p^T f̂_p,   e_p = φ_p^T ê_p,   (15)
f_q = φ_q^T f̂_q,   e_q = φ_q^T ê_q,   (16)

where N_p and N_q are the numbers of DOFs stored in the discrete vectors. A hat over a variable denotes the vector of discrete values; i.e., ê_p is the column vector of discrete DOF values for the scalar field e_p. To make the method Galerkin, v_p and v_q are discretised with the same basis functions as f_p and f_q, respectively. The basis function families that we use are Lagrange for φ_p and Raviart-Thomas [33] for φ_q; however, any basis functions that satisfy the function spaces in Equation (12) are suitable. In Equations (15) and (16), the sizes of φ_p and φ_q are N_p × 1 and N_q × 2, respectively. The notation ⟨·, ·⟩ is used for the standard inner product on R, as opposed to ⟨·|·⟩, the L² inner product. For the lowest-order Lagrange and Raviart-Thomas elements, ê_p and f̂_p are stored at nodes and ê_q and f̂_q are stored at edges. For details on higher-order elements see the FEniCS book [26]. Substituting the approximate functions from Equations (15) and (16), as well as the corresponding test functions, into Equation (14a) gives Equation (17), where ψ_p = φ_p|_∂Ω represents φ_p evaluated at the boundary and ψ_q = φ_q · n|_∂Ω represents φ_q evaluated in the normal direction at the boundary; the boundary basis functions satisfy the relations in Equation (18). Substituting the approximate functions from Equations (15), (16) and (18), as well as the corresponding test functions, into Equation (14b) gives Equation (19). The matrices in Equations (17) and (19) are given in Equation (20): the mass matrices M_p = ⟨φ_p | φ_p^T⟩_Ω and M_q = ⟨φ_q | φ_q^T⟩_Ω, the corresponding interior matrices K_p and K_q, and the boundary matrices L_p = ⟨ψ_p | ψ_q^T⟩_∂Ω and L_q = ⟨ψ_q | ψ_p^T⟩_∂Ω, where the inner product acts elementwise on the matrix entries (Equation (21)). Applying a typical weak-form approach, Equations (17) and (19) must hold for any v̂_p ∈ R^Np and v̂_q ∈ R^Nq. Therefore, according to the fundamental theorem of variational calculus, the following matrix system of equations holds,

M_p f̂_p = K_p ê_q + L_p ê_q,   M_q f̂_q = K_q ê_p + L_q ê_p.   (22)

When implementing Equation (22), it is essential to include the discrete dynamic and constitutive laws, which coincide with Equations (3) and (6), respectively,

f̂_p = −dp̂/dt,   f̂_q = −dq̂/dt,   (23)
ê_p = (1/ρ_w) I p̂,   ê_q = k_w I q̂,   (24)

where I is the identity matrix, and p̂ and q̂ are the discrete vectors of momentum DOFs and strain DOFs, respectively. This formulation assumes constant material properties and therefore a constant Q matrix. Finally, substituting Equations (23) and (24) into Equation (22), we obtain the matrix system of equations, Equation (25). In Sect. 2.3 we prove that Equation (25) can be written in two different canonical PH forms (one for Dirichlet and one for Neumann boundary conditions) and that it represents a non-degenerate Stokes-Dirac structure.
Discrete Conservation of Power Proof
To proceed with a conservation of power proof, we first define the approximate Hamiltonian, Ĥ, and Hamiltonian density in the same way as the exact functionals in Equations (4) and (5). The variables p and q have the same basis functions as e_p and e_q, respectively. Substituting in the discrete vectors p̂ and q̂ and their corresponding basis functions gives

Ĥ = (1/(2ρ_w)) p̂^T M_p p̂ + (k_w/2) q̂^T M_q q̂.

The relationship between the approximate state variable functions and the approximate effort functions from Equation (16) can be found by taking the partial derivative of Ĥ(p, q) with respect to p and q.
The corresponding relation between the discrete effort variables and the state vectors is ê_p = (1/ρ_w) p̂ and ê_q = k_w q̂, as in Equation (24). To retain the structure of the continuous system, the discretised equations must have the same energy-conserving structure as the continuous equations. Therefore, the rate of change of the Hamiltonian must depend only on the boundary variables, as in Theorem 2.0.1. The following conservation of energy proof is influenced by the thesis of Kotyczka [22] and the paper by Cardoso-Ribeiro [11]. To prove Theorem 2.2.1, a mapping between general variables and boundary variables must be formulated. Following the work of [22], we decompose L_p into T_p, S_q and L_q into T_q, S_p, as

L_p = T_p^T S_q,   L_q = T_q^T S_p.   (30)

The matrix T_p is simply a mapping from all ê_p DOFs of the mesh to the DOFs that have ê_p defined at the boundary. Similarly, T_q is a mapping from all ê_q DOFs of the mesh to the DOFs that have ê_q defined at the boundary. Both T_p and T_q consist of zeroes and ones and are semi-orthogonal; therefore T_p T_p^T = I_Np and T_q T_q^T = I_Nq. For first-order Lagrange and Raviart-Thomas elements, the identity matrix I_Np has a number of rows and columns equal to the number of nodes (N_p) and I_Nq has a number of rows and columns equal to the number of edges (N_q). Calculation of S_p and S_q is done trivially by using the semi-orthogonality of T_p/q and Equation (30), giving S_q = T_p L_p and S_p = T_q L_q. The matrix S_p is a mapping from momentum efforts ê_p to boundary strain flows f̂_qb; similarly, S_q is a mapping from stress efforts ê_q to boundary velocity flows f̂_pb. The matrix mappings of T_p/q and S_p/q are summarised as

ê_pb = T_p ê_p,   ê_qb = T_q ê_q,   f̂_pb = S_q ê_q,   f̂_qb = S_p ê_p.   (31)

To prove Theorem 2.2.1 we need a small lemma.
Lemma 2.2.2 (Lemma 1 [24]) Applying integration by parts in reverse to the inner products that define the interior matrices K_p, K_q and the boundary matrix L_p yields a discrete integration-by-parts identity relating K_p and K_q to L_p; the boundary matrix therefore exactly accounts for the asymmetry between the interior matrices.
Corollary 2.2.3
The rate of change of the Hamiltonian of the discrete system conserves energy in the same way as that of the continuous system in Theorem 2.0.1.
Proof
Beginning with Equation (33) and substituting in L_p from Equation (20) gives

Ḣ = ê_p^T ⟨ψ_p | ψ_q^T⟩_∂Ω ê_q.

Substituting Equation (18) gives

Ḣ = ⟨e_p | e_q · n⟩_∂Ω,

which confirms that the rate of change of the discrete system Hamiltonian is a function of the efforts at the boundary. This correctly coincides with Theorem 2.0.1.
The approximate system inputs and outputs are thus defined in the same way as Equations (9) and (10), either

u = e_q · n|_∂Ω,   y = e_p|_∂Ω,   (34)

or

u = e_p|_∂Ω,   y = e_q · n|_∂Ω.   (35)
Stokes-Dirac Structure and the Canonical Port-Hamiltonian Form
In Equation (22) we introduced the model in a non-canonical PH form; however, in this section we prove that Equation (22) is equivalent to a structure-preserving coupling of canonical port-Hamiltonian forms. This also proves that the discretisation ensures a Stokes-Dirac structure [43], thereby conserving the structure of the continuous equations, Equation (7). To formulate the system in an input-state-output PH form, we first define the input and output functions at the boundary corresponding to Equations (34) and (35) in terms of their discrete vectors (û_q, ŷ_p, û_p, ŷ_q) and basis functions (θ_q, θ_p), expanding each boundary function in its boundary basis (Equations (36) and (37)),
where θ_p and θ_q contain the entries of ψ_p and ψ_q corresponding to the boundary DOFs. The first canonical PH form, for Neumann boundary conditions, can be set up by using the boundary decomposition formulated in Equation (32). Inserting the transpose of Equation (38) into Equation (22) gives Equation (39). We then multiply the output equation in Equation (34) by the trial function v_yq = v̂_yq^T θ_q|_{∂Ω_N}, where a subscript N denotes a Neumann boundary. Following this, we integrate over the boundary and then use the fundamental theorem of variational calculus to get Equation (40), where M_yp = ⟨θ_q | θ_p^T⟩_{∂Ω_N}; the interconnection matrix B_qb is defined in Equation (41). Combining Equations (39) to (41) gives the PH canonical form, Equation (42). Including the dynamic and constitutive laws from Equations (23) and (24) gives the input-state-output PH form, Equation (43), where M_qb is the mass matrix, J_qb is the skew-symmetric matrix, and Q_qb is the constitutive law matrix. The infinite-dimensional canonical form corresponding to the discretised canonical form in Equation (42) is Equation (44), where M_qb is the mass operator and J_qb is the skew-symmetric operator. Technically, Equations (43) and (44) are canonical forms for domains with Neumann conditions only. In the following we formulate the canonical form for Dirichlet conditions. We take the transpose of Equation (38) and use the fact that L_q = L_p^T to give Equation (45). Equation (45) can be substituted into Equation (22), and the discretisation of y_q can be done in the same way as for y_p in Equation (40), to give the Dirichlet boundary condition equivalent of Equation (42). This discrete canonical form is Equation (46), and the corresponding input-state-output PH form is Equation (47), where M_yq = ⟨θ_p | θ_q^T⟩_{∂Ω_D} and a subscript D denotes a Dirichlet boundary. The canonical PH form in Equation (47) has a mass matrix M_pb, a skew-symmetric matrix J_pb, and a constitutive law matrix Q_pb; the interconnection matrix B_pb is defined in Equation (48). The infinite-dimensional canonical form corresponding to Equation (46) is Equation (49), where M_pb is the mass operator and J_pb is the skew-symmetric operator.
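To illustrate the structure of such an input-state-output PH system in a generic finite-dimensional setting (random matrices standing in for the specific M, J, Q, B of this paper), the sketch below evaluates ẋ = J Q x + B u, y = Bᵀ Q x and confirms the power balance dH/dt = yᵀu, which holds exactly because the skew-symmetric internal term contributes no energy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2

# Generic PH matrices: J skew-symmetric, Q symmetric positive definite.
S = rng.standard_normal((n, n)); J = S - S.T
Rm = rng.standard_normal((n, n)); Q = Rm @ Rm.T + n * np.eye(n)
B = rng.standard_normal((n, m))

x = rng.standard_normal(n)
u = np.array([1.0, -0.5])

xdot = J @ Q @ x + B @ u         # state equation
y = B.T @ Q @ x                  # collocated output
dH_dt = x @ Q @ xdot             # dH/dt = grad(H)^T xdot, with H = x^T Q x / 2

# The internal (skew) term contributes nothing: x^T Q J Q x = 0,
# so the energy rate equals the supplied power y^T u.
print(dH_dt, y @ u)              # the two numbers agree to machine precision
```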
Equations (47) and (49) are canonical PH forms only for boundaries with Dirichlet conditions. For a domain with mixed boundaries, our system in Equation (25) needs to be equivalent to a canonical PH system, or in our case a combination of canonical PH systems. Any closed domain with mixed Neumann and Dirichlet boundary conditions can be subdivided into subdomains with only Neumann or only Dirichlet boundary conditions. This idea has been taken advantage of in Brugnoli et al.'s work [9], where the authors numerically segment the domain and apply PFEM to each section. Here the idea is used only conceptually, to prove that our system in Equation (25) is equivalent to a combination of canonical input-state-output PH formulations and therefore, by the compositionality property, retains the Stokes-Dirac structure of the analytic equations, Equation (7). This means that mixed boundary conditions can be implemented in the weak form, as detailed in Sect. 3, and that the resulting system is a Stokes-Dirac structure. It is also important to note that our formulation is non-degenerate, because M_p and M_q are full rank; the matrices turn out to be full rank because we use the same basis functions for the effort and flow functions. Proposition 1 of Kotyczka's paper [24] uses a similar compositionality argument to allow mixed boundary conditions and mixed causality at the boundaries. However, in Kotyczka's work, the non-full-rank M_p and M_q matrices create the requirement for user-defined parameters in order to form a non-degenerate Stokes-Dirac structure.

Theorem 2.3.1 Equation (25) is equivalent to a structure-preserving combination of canonical PH forms for any connected domain with mixed boundary conditions.

Proof First we subdivide the domain so that each connected Neumann boundary lies in a subdomain, denoted Ω_Ni, that does not connect with or overlap any Ω_Nj (j ≠ i) and is not connected to any other external boundaries. A simple example of such a subdivision is shown in Fig. 1. Each of the i subdomains has inputs and outputs split into u_qi, y_pi at the external boundary and u_qi^int, y_pi^int at the internal boundary. The remainder of the domain is therefore connected and has only Dirichlet boundary conditions; we denote this subdomain Ω_D.
The inputs and outputs of Ω_D are split into u_pk, y_qk at the external boundaries and u_pi^int, y_qi^int at the internal boundaries that border Ω_Ni. The canonical form of Equation (44) is used in the Ω_Ni subdomains and Equation (49) is used in the Ω_D subdomain. Note that the canonical forms are modified accordingly, to split the inputs and outputs into internal and external parts. The causal interconnection relations [6,24] at each internal boundary can then be written as

u_qi^int = y_qi^int,   u_pi^int = y_pi^int.   (50)

This ensures conservation of energy flow between the subdomains due to the power conserving inner product,

⟨u_qi^int | y_pi^int⟩ = ⟨y_qi^int | u_pi^int⟩.

Therefore, since each subdomain has a PH Stokes-Dirac structure, the power conserving interconnection ensures by compositionality that the total system is also a Stokes-Dirac structure. Also, each subdomain has the correct canonical form for its Neumann or Dirichlet boundary conditions. Lastly, any connected domain is equivalent to a decomposition of the kind we have described. The combination of these results proves that Equation (25) is equivalent to a structure-preserving combination of canonical PH forms for any connected domain with mixed boundary conditions.
As will be seen in Sect. 6, the infinite-dimensional canonical form in Equation (49) can be used to determine the power conserving interconnection with other canonical PH systems, where the interconnection enforces a Dirichlet condition on the wave domain boundary. Similarly, the infinite-dimensional canonical form of Equation (44) can be used to find the power conserving interconnection for connections that assign a Neumann condition on the wave domain boundary. Due to the conclusion of Theorem 2.3.1, that any domain is equivalent to a subdivision into subdomains with either the canonical form of Equation (49) or that of Equation (44), both Dirichlet and Neumann interconnections can be implemented on a domain. This means that the system developed in this paper can have mixed boundary conditions as well as mixed causality interconnections with other PH systems.
Wave Equation FEniCS Implementation
In this section, the FEniCS implementation of Equations (14a) and (14b) on a rectangle domain is detailed. Appendix A supplements this section by detailing the Python code for the implementation. A schematic of the wave domain is shown in Fig. 2.
An unstructured, triangular mesh was created over the domain with the FEniCS meshing software. The boundary conditions on the domain are set through the inputs in Equations (34) and (35), which are defined for the left, right, and middle boundaries in Equations (52a)-(52c), respectively. Dirichlet conditions are applied to both the left and the right boundaries: the left boundary is set as an input condition, whereas the right boundary is given a fixed zero value. The top and bottom boundaries have a zero-flux Neumann condition applied. Separating the boundary terms in Equations (14a) and (14b) for each boundary condition and reverting to integral rather than inner-product notation gives Equations (54a) and (54b), in which the terms inside the boundary integrals are evaluated at their respective boundaries. Substituting the state variables f_p = −ṗ, f_q = −q̇, e_p = p/ρ_w, e_q = k_w q and the boundary terms from Equations (52a)-(52c) gives Equations (55a) and (55b). These equations can be implemented in FEniCS, which automatically generates a matrix system of equations in the form of Equation (25). However, to solve these equations the system must also be discretised in time. Symplectic time integration schemes conserve the symplectic structure of the continuous equations and approximately conserve the Hamiltonian; they are therefore the natural choice for the temporal discretisation. The symplectic Euler (SE) time integration scheme is applied to Equations (55a) and (55b) to give Equations (56a) and (56b), where the superscript m denotes the variable at the previous time step and the L, R, and M subscripts denote variables at the left, right, and middle boundaries, respectively. The SE scheme combines an explicit step for Equation (56a) with an implicit step for Equation (56b). When the Hamiltonian is separable the SE scheme is semi-explicit, meaning that Equation (56b) could be solved explicitly after the solution of Equation (56a); however, for ease of implementation in FEniCS, the equations are solved in one implicit step. By Theorem 2.2.1, Equations (55a) and (55b) conserve the Hamiltonian. Combining this with SE integration gives a discrete system that retains the Hamiltonian structure of the continuum equations and conserves energy for long times, as further discussed in Sect. 3.1.
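To make the update structure concrete, the following is a minimal NumPy sketch of one SE step, not the paper's FEniCS code: the matrices M_p, M_q and the discrete coupling operator D (including its boundary contributions), as well as the sign conventions, are stand-ins for the operators assembled from Equations (56a) and (56b).

    import numpy as np

    def se_step(p, q, dt, M_p, M_q, D, rho_w, k_w):
        # Explicit update of p from the previous q (effort e_q = k_w * q).
        p_new = p + dt * np.linalg.solve(M_p, -D @ (k_w * q))
        # Update of q using the freshly computed p (effort e_p = p / rho_w);
        # this ordering makes the pair symplectic for a separable Hamiltonian.
        q_new = q + dt * np.linalg.solve(M_q, D.T @ (p_new / rho_w))
        return p_new, q_new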
The energy bound on symplectic methods is, in general, proportional to O((Δt)^r), where r is the order of the time integration scheme [20]. To compare the energy bound between methods of different order, a second-order symplectic scheme, the Störmer-Verlet (SV) method [19], is also implemented, given in Equations (57a)-(57c). Solving Equations (57a)-(57c) requires two systems of equations to be solved per time step: Equation (57a) is solved to get q^{1/2}, then Equations (57b) and (57c) are solved for q and p. Although in the general case the energy residual is only bounded for symplectic methods, since the example we are modelling is linear we can improve this result and obtain exact energy conservation. We do this with the symplectic midpoint (SM) method, which conserves all quadratic invariants for linear systems [20]. The variational form of the SM scheme is given in Equation (58).
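Under the same stand-in notation as the SE sketch above, SV can be viewed as two adjoint SE half-steps, and SM as an implicit midpoint solve; for a generic linear system ẋ = Ax the midpoint step reads as follows (a sketch, not the paper's code):

    import numpy as np

    def sv_step(p, q, dt, M_p, M_q, D, rho_w, k_w):
        # Half-step for q, full step for p, then the second half-step for q.
        q_half = q + 0.5 * dt * np.linalg.solve(M_q, D.T @ (p / rho_w))
        p_new = p + dt * np.linalg.solve(M_p, -D @ (k_w * q_half))
        q_new = q_half + 0.5 * dt * np.linalg.solve(M_q, D.T @ (p_new / rho_w))
        return p_new, q_new

    def sm_step(x, dt, A):
        # Implicit (symplectic) midpoint: (I - dt/2 A) x_new = (I + dt/2 A) x.
        I = np.eye(x.size)
        return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ x)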
Wave Results
In this section, the model that is detailed in Sect. 3 and proven to conserve energy by Theorem 2.2.1 is implemented with a range of different time integration schemes. Our method, which uses a weak Dirichlet boundary condition implementation, is compared to an implementation with Dirichlet boundary conditions imposed in the typical strong form. This is done to show that naively setting boundary conditions in a strong manner is detrimental to energy conservation. The strong Dirichlet implementation modifies the matrix system of equations generated by FEniCS to directly enforce the boundary condition values. This differs from our weak Dirichlet implementation, where we apply the boundary conditions by specifying the boundary integral terms in Equations (55a) and (55b). A detailed example of the different boundary condition implementations in FEniCS is shown in Appendix A.
For the wave equation in PH form, setting p at the boundary is a Dirichlet condition, equivalent to setting ρ_w e_p, and setting (q · n) is a Neumann condition, equivalent to setting (1/k_w)(e_q · n). The reason that we can implement both Dirichlet and Neumann conditions in the weak form is that we integrate both Equations (13a) and (13b) by parts, giving boundary terms for Dirichlet and Neumann conditions in Equations (55a) and (55b). This approach differs from the PFEM of Cardoso-Ribeiro [10], where integration by parts is only used on a subset of the governing equations. Our approach thus has the advantage of allowing mixed boundary conditions without the need for Lagrange multipliers. Therefore, our method results in a matrix system of equations that can be solved as an ordinary differential equation (ODE), rather than a differential algebraic equation (DAE), which is easier to solve in general. All methods in this section use a time step of Δt = 5 × 10⁻⁴ s. Figure 3 shows the resulting Hamiltonian, Ĥ, for the input in Equations (52a)-(52c). The Hamiltonian is expected to be constant after 0.25 s, because the input boundary condition is then set to zero, so no energy flows into or out of the domain.
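The constancy check in Fig. 3 amounts to evaluating the discrete Hamiltonian at every step; a minimal sketch of this evaluation, assuming the quadratic form of Equation (27) with mass matrices M_p and M_q:

    def discrete_hamiltonian(p, q, M_p, M_q, rho_w, k_w):
        # H = 0.5 * (p^T M_p e_p + q^T M_q e_q), e_p = p/rho_w, e_q = k_w * q.
        return 0.5 * (p @ (M_p @ (p / rho_w)) + q @ (M_q @ (k_w * q)))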
In Fig. 3(a) we show that the implicit Euler (IE) scheme incorrectly dissipates energy and the explicit Euler (EE) scheme has a fictitious increase in energy, resulting in instability. At t ≈ 0.9 s the SE integration schemes for strong and weak boundary conditions show an undesirable non-constant Hamiltonian. A zoomed-in image of the non-constant behaviour is also shown in Fig. 3(b). The bump at t ≈ 0.9 s is likely due to high-order dynamics when the wavefront approaches the right boundary that are not accounted for in the first-order SE scheme. To remedy this situation, three second-order integration schemes are implemented: explicit Heun's (EH) (also called improved Euler [39]), SV, and SM. All of these methods drastically decrease the bump at t ≈ 0.9 s. It should be noted that decreasing the time step of the first-order methods also decreases the bump (not shown). Interestingly, EH removes the bump completely, whereas SV does not. SM also removes the bump completely, because the SM scheme conserves quadratic invariants exactly for linear systems [20]. Figure 3(b) shows that for SE and SV the Hamiltonian oscillates about a conserved average value. This plot also shows that SV has an approximately constant Hamiltonian (apart from the aforementioned bump at t ≈ 0.9 s), whereas EH has a gradually increasing Hamiltonian, indicating that energy is not conserved for long times. For SV, the Hamiltonian oscillates about a conserved value and is therefore conserved for long times; the bound on the oscillations also converges towards the conserved value as the time step is decreased [20]. The energy bound for SE and SV is likely larger at the bump because of the reflecting boundary, which would hint that the non-perfect energy conservation of SE and SV is heavily influenced by the boundary and not so much by the internal domain.
One problem of the first-order SE weak method in Fig. 3 is its oscillatory behaviour. These oscillations may be due to applying an essential boundary condition (EBC) weakly, without a penalty method. Typically, EBCs are applied with a penalty method such as Nitsche's [25,30]; however, according to Scovazzi [37], penalty methods are not required due to the hyperbolic nature of the wave equation. This could mean that the oscillations are simply the expected oscillations of low-order symplectic time integration schemes. As anticipated by the proof of boundedness in [18], the oscillations are reduced when the time step is decreased (not shown) or the order of the symplectic method is increased, as seen with the SV method.
To compare the energy conservation of the weak and strong boundary condition implementations, we define the energy residual at time t_f as in Equation (59), where Ĥ, which is equal to the internal energy in the domain at time t_f, is calculated from Equation (27) with the q̂ and p̂ variables at time t_f. Kotyczka et al. [23] show that implicit Gauss-Legendre schemes such as SM conserve the discrete energy exactly for linear PH systems. However, for schemes such as SV and SE that do not conserve energy perfectly, we expect an energy error both from the non-conservation within the domain and from a non-exact energy transfer through the boundary. The second term in Equation (59) is simply the energy that we expect to have been transferred through the left boundary out of the domain (the term is negative for flow into the domain) between the initial time t_0 and time t_f. Since we want to calculate the numerical energy flow out of the boundary, care must be taken to evaluate the output energy in the same way as the numerical time integration scheme; this ensures that there is no discrepancy between the accuracies with which the two terms in Equation (59) are calculated. To ensure this consistent numerical accuracy, variables with an overscript are the effort or state variables evaluated at the time steps that correspond to the chosen time integration scheme. For example, the expected energy contribution of the SV scheme at the boundary can be calculated by decomposing the SV method into two adjoint SE steps of half a time step each [20], giving Equations (61a) and (61b), where F(·) denotes a function of the entries in the brackets. Then, knowing that the superscripts of q and p_L are the same as the superscripts of q and p in Equation (61a) for the first half time step, and the same as in Equation (61b) for the second half time step, we can calculate the residual of Equation (59) in two half time steps. The energy residual in Equation (59) is plotted for multiple time integration schemes in Fig. 4.
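In the spirit of Equation (59), the residual can be accumulated as the internal-energy change plus the boundary energy, with the boundary power sampled consistently with the time integrator. A sketch, where boundary_power is a hypothetical array of u·y power samples evaluated at the integrator's substeps:

    import numpy as np

    def energy_residual(H_tf, H_t0, boundary_power, dt):
        # Energy that left through the boundary (negative for inflow),
        # accumulated with the same quadrature as the time stepper.
        E_out = dt * np.sum(boundary_power)
        return H_tf - H_t0 + E_out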
It is clear from the bounded energy residual of the SE and SV schemes and the exactly conserved energy of the SM scheme in Fig. 4 that the weak boundary condition implementation conserves energy for long times. This agrees with our proof of Theorem 2.2.1 and the expected bounded energy residual of symplectic time integration schemes. Again, for the SE and SV schemes, the energy is conserved in an average sense and oscillations about the conserved energy do occur. As expected, the SM scheme conserves energy exactly, with an energy residual of < 10⁻¹², which is round-off error. The strong implementation of the input Dirichlet boundary condition does not conserve energy. More precisely, the strong implementation's energy residual depends on the refinement of the spatial mesh. To display this effect, the energy residual of the wave equation with SV time integration is analysed for varying characteristic element length (√(mean element area)) and is shown in Fig. 5. A quadratic trend is plotted to show that the energy residual of the strong boundary condition implementation has a quadratic dependence on the characteristic element length. This differs from the weak boundary condition implementation, which conserves energy independently of the mesh refinement. This agrees with our proof of Theorem 2.2.1, i.e., that our spatial discretisation is perfectly energy-conserving. Since the energy error is bounded for symplectic time integration schemes [19], the energy error of the weak boundary implementation depends only on the step size of the symplectic time integration scheme.
Temporal integration schemes cannot, in general, conserve both the exact energy and the symplectic structure of the system [20]. However, a general result that applies to nonlinear systems, as well as to our linear system, is that conserving the symplectic structure results in a bounded energy error which decreases as the time step is reduced. In the following sections, we focus on the results of the SV scheme rather than the SM scheme, to show energy conservation results that resemble what we expect for general nonlinear problems.
Wave Equation Comparison with Analytical Solution
In this section, our numerical model with a weak boundary condition implementation is compared against an analytical solution to verify stable spatial and temporal convergence [32]. The initial and boundary conditions are given in Equations (63a)-(63e); in particular, the top boundary carries the zero-flux condition q(x, L_y, t) · n = 0.
The corresponding analytical solution is then known in closed form, with all constants given in Appendix D.
Spatial Convergence
The error between the numerical model and the analytic solution is evaluated for a range of characteristic element lengths to show the spatial convergence of the model. A table of the number of elements, with corresponding error and convergence details for each refinement level, is shown in Appendix E. The SV time integration scheme was used with a time step of 5 × 10⁻⁴ s. The L² error norm at each step n is defined as ‖p_n − p_a(t_n)‖_{L²}, where p_a(t_n) is the exact solution evaluated at step n. This error is calculated accurately with the 'errornorm' function in FEniCS. The maximum L² error norm over 1.5 s of simulation is plotted in Fig. 6 for each characteristic element length. In the figure legend, the numbers following P and RT denote the order of the Lagrange and Raviart-Thomas elements, respectively; i.e., P1RT2 uses first-order Lagrange elements and second-order Raviart-Thomas elements. As can be seen, the L² error norm for all element order combinations shows standard convergence with respect to the characteristic element length. Here we define standard spatial convergence as convergence of order O((x_c)^{k+1}), where k is the order of the method's lowest-order basis function and x_c is the characteristic element length.
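A sketch of the error evaluation, assuming p_exact is a time-dependent Expression and history is a list of (t_n, Function) pairs saved during the simulation:

    from fenics import errornorm

    errors = []
    for t_n, p_h in history:
        p_exact.t = t_n                      # update the exact solution's time
        errors.append(errornorm(p_exact, p_h, norm_type='L2'))
    max_L2_error = max(errors)               # the quantity plotted in Fig. 6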
Modal Analysis
To ensure the correct handling of mixed boundary conditions, the eigenvalues of the model were verified to be accurate by performing a modal analysis. The analytic eigenvalues were calculated by separation of variables of Equation (1). This method can be shown to give three first-order eigenvalue problems, Equations (67a)-(67c), where λ and μ are the eigenvalues of the first-order spatial systems and ω² = c²(μ + λ) is the squared eigenfrequency that we want to predict. For the boundary conditions in Equations (63a)-(63e), the real components of the eigenvalues in Equations (67b) and (67c) are zero, and the complex components follow directly; therefore, from Equation (67a), the analytic eigenfrequencies we wish to find are obtained. To predict the eigenfrequencies of our model, Equations (55a) and (55b) were discretised in FEniCS, creating a system of equations in which M_e, K_e, and L_e are the mass, stiffness, and boundary matrices that determine the eigenfrequencies of the system. The matrix M_e⁻¹(K_e + L_e) was then input into NumPy's eigensolver to calculate the eigenfrequencies of the system. A plot of the complex part of the first 50 analytic and modelled eigenfrequency pairs for a mesh with 1322 elements is shown in Fig. 7. The real parts of all eigenfrequencies equal zero, as expected for the wave equation with no damping.
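A sketch of this step, assuming M_e, K_e, and L_e have been assembled and converted to dense NumPy arrays:

    import numpy as np

    eigvals = np.linalg.eigvals(np.linalg.solve(M_e, K_e + L_e))
    # The undamped wave equation gives purely imaginary eigenvalues; the
    # modelled eigenfrequencies are their imaginary parts, sorted ascending.
    omega_model = np.sort(np.abs(eigvals.imag))
    print(omega_model[:50])   # compare with the analytic values (Fig. 7)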
As shown in Fig. 7, the eigenfrequencies are predicted accurately with no spurious modes. A plot of the convergence of the eigenfrequencies with respect to the characteristic element length is shown in Fig. 8. This shows standard quadratic convergence for first order Lagrange and Raviart-Thomas elements.
Time Convergence
The numerical error for this model is heavily dominated by the spatial discretisation. Therefore, to show time convergence, third-order Lagrange and Raviart-Thomas elements are used. This ensures that the initial spatial error is small, so that the error that propagates through the domain is due to the temporal discretisation. A convergence study is done for a 1.5 s simulation and an error growth study is assessed for long times (t = 10 s). The long-time analysis shows the rate at which the error grows through time and displays our model's effectiveness for long-time simulations. Although the rate of error growth for different symplectic schemes is well known [18], this section importantly shows that our spatial discretisation does not deteriorate the expected error growth rate.
The SE and SV time integration schemes have been implemented to show the temporal convergence of our method. Linear and quadratic convergence is shown for the SE and SV schemes, respectively, in Fig. 9, where the maximum L² error over the 1.5 s simulation is plotted for a range of time steps. This shows that our method gives standard temporal convergence, which we define as convergence of order O((Δt)^r), where r is the order of the time integration scheme. These simulations were performed on a spatial discretisation with 9358 elements.
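The convergence orders reported in Fig. 9 can be extracted as the slope of log(error) against log(Δt); a minimal sketch:

    import numpy as np

    def observed_order(errors, dts):
        # Least-squares slope of log(e) vs log(dt); ~1 for SE, ~2 for SV.
        return np.polyfit(np.log(dts), np.log(errors), 1)[0]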
The order at which the state variable error of a method grows is a common metric for the accuracy of symplectic and multi-symplectic methods, as assessed extensively in Hairer's book [20]. Hairer shows that symplectic methods have state variable error growth of order O(t(Δt)^r), where r is the order of the time integration scheme. The time step convergence in Fig. 9 for the fixed final time t = 1.5 s confirms that the error converges with order O((Δt)^r). Therefore, observing an error growth proportional to t when simulating for long times is sufficient to show the correct order of error growth, O(t(Δt)^r). The L² error norm for a 10 s simulation of the wave equation is shown in Fig. 10 for various time integration schemes. To give a fair comparison, the schemes have time steps that result in the same number of function evaluations: 0.00025 s for IE and SM and 0.0005 s for SV. To display a result that does not blow up immediately, the time step for EH is decreased even further, to 0.000125 s.
In Fig. 10 the EH method blows up with exponential error growth because it does not have a bounded energy residual. This behaviour is typical of fully explicit methods, which can be unstable for long times [19]. The IE method has large error growth due to its inherent energy dissipation. This error increase tapers off (not shown) because of the complete loss of energy in the numerical model, which results in p_n approaching a constant zero throughout the domain. The symplectic methods both show an error that is linearly dependent on time, as required to validate that the error growth is proportional to O(t(Δt)^r). The high-frequency oscillations of the L² error norm in Fig. 10 are due to p_n varying between being zero throughout the domain and having a maximum, as shown in Fig. 11. The oscillation frequency is therefore the frequency at which p oscillates from maximum/minimum to zero.
Electromechanical Lumped Parameter Model
This section details a simple LPM for a linearly actuated electric motor. The system diagram and bond-graph schematic are shown in Fig. 12, with constants defined in Appendix D. The current of the electrical system is i and the displacement of the linear motor is s. The bond-graph methodology is a modular approach for LPMs that ensures conservation of energy within and between models; for a review see the work by Gawthrop [16]. The PH framework extends bond-graphs to also allow continuum models that conserve energy.
Fig. 12 (a) System diagram and (b) bond-graph diagram for an electromechanical system, where Se, R, I, C, and GY denote effort sources, dissipative components, inductive/mass storage components, capacitive/spring storage components, and gyrator components, respectively.
The Hamiltonian for this system is the sum of the electrical and mechanical storage energies, where the canonical momentum for the mechanical subsystem is given by h_M = mṡ and the electrical-system equivalent of the canonical momentum is the magnetic flux linkage, denoted by h_E = L_E i. The canonical PH form for this system follows, where B_u is the input control matrix with corresponding input u and output y_u, and B_F is the boundary port matrix with corresponding boundary force F_b and boundary output y_F. Including the constitutive laws and evaluating the B_u and B_F matrices gives Equation (73). In this paper, the resistances R_E and R_M are set to zero, as a non-dissipative system is required to display a conserved Hamiltonian. The values of all constants used in the simulations are given in Appendix D. From Sect. 6 onwards, F_b will be the reaction force from the coupled wave equation. The first part of Equation (73) can be written as a typical linear system of ODEs, ẏ = Ay + Bu, with a state vector y = [h_E, h_M, s]ᵀ and a control/interconnection vector u = [u, F_b, 0]ᵀ; this is Equation (74). To create a monolithic coupling of this LPM with the wave equation from Sect. 3, the LPM needs to be implemented in FEniCS. The ODE in Equation (74) can be implemented in FEniCS by using the 'real' function space, which assumes that a function has a single value over the whole domain, i.e., no spatial dependence. This makes the domain on which the equations are implemented irrelevant to the ODE. Equation (74) can then be implemented by multiplying by a trial function, v_y, and taking the trace against the border of an arbitrary domain. To allow easier coupling in the following section, the trace is taken against the left boundary of the wave domain. The remainder of this section details the implementation of the LPM in FEniCS. Although in the following sections we use an SV time integration scheme, the SE scheme is detailed here because it is easier to follow; for the implementation of the SV method see the GitHub repository at https://github.com/FinbarArgus/portHamiltonian_FEM.git. For the SE scheme, the ODE can be implemented by solving, at each time step, the matrix system of equations generated by the variational form in Equation (75). This method of implementing an LPM by assigning the variables as real functions over the whole domain is not optimally efficient, and thus future work should look at developing a method specifically for LPMs that is compatible with FEniCS. In Equation (75), y^m is the vector of state variables at the previous time step and y^a is a vector of state variables at a combination of current and previous time steps, as defined in Equation (76). Once the real function spaces are created and the vectors in Equation (76) are formed, the variational form in Equation (75) can be expressed and solved with the Python code in Appendix B.
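A minimal sketch of the 'real' function-space construction described above; the forms A_y and B_u in the final comment are placeholders for the discretised right-hand side of Equation (74), not the paper's exact code:

    from fenics import *

    # One global DOF per component, no spatial dependence: y = [h_E, h_M, s].
    R = VectorFunctionSpace(mesh, 'Real', 0, dim=3)
    y = TrialFunction(R)
    v_y = TestFunction(R)
    y_m = Function(R)           # state at the previous time step

    # One SE step of y' = A y + B u in weak form, traced over the left
    # boundary ds_L of the wave domain so it can couple there later:
    #   F = dot(v_y, (y - y_m)/dt - A_y(y_m, y) - B_u(u)) * ds_L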
Coupling with the Electromechanical Model
In this section, the wave equation from Sect. 3 is coupled with the EM PH model from Sect. 5. A schematic of the combined model is shown in Fig. 13 for a rectangular wave domain.
Following the previous sections, the boundaries denoted ∂M and ∂R have zero Neumann and zero Dirichlet conditions, respectively. The ∂L boundary is the Dirichlet boundary connection with the LPM. All boundary conditions are implemented in a weak manner. Since the only boundary connection is a Dirichlet boundary, the canonical form of Equation (49) can be used to set up the interconnection with Equation (73). The relationship between the inputs and outputs of the wave domain and the EM domain can be written in terms of a compact operator W and its adjoint operator W*. The duality pairings for the inputs and outputs are defined in Equation (78), where the ⟨·|·⟩_{∂L} duality product is an L² inner product that acts over the connection boundary and ⟨·,·⟩ is the standard inner product in ℝ. We know that the velocity of the left wave boundary is directly set by the output velocity of the EM system; here, w transforms a scalar into a constant function over the boundary, with the value of y_F. Substituting the boundary output relation from Equation (73) gives Equation (80). In practice, to turn the scalar value into a function, it must be turned into a constant vector, and this vector must be contracted (via a dot product) with a vector of basis functions. Knowing that u_p = θ_pᵀ û_p, we now enforce that every point on the left boundary of the wave domain has the same vertical velocity (ê_p|_{∂L}) as the vertical velocity of the motor output (h_M/m). Therefore, û_p = −ê_p|_{∂L} has the value −h_M/m for each of its DOFs, which gives Equation (83). The energy-conserving relation between the L² and ℝ inner products in Equation (78) can be used to determine W*: evaluating the left-hand side of Equation (84), equating it to the right-hand side, and substituting y_q from Equation (35) and e_q = k_w q gives Equation (88). Therefore, Equations (80) and (88) are the energy-conserving interconnection relations between the two domains. The total Hamiltonian of the combined system is the sum of the wave and EM Hamiltonians. Finally, the canonical forms of Equations (47) and (73) can be combined to give the input-state-output PH form of the interconnected system, Equation (90), where D_p and D_p⁰ are the interconnection matrices of the skew-symmetric system matrix. For skew-symmetry of the system matrix to hold, a compatibility relation between D_p and D_p⁰ must be satisfied. To verify that this skew-symmetry holds, we calculate D_p and D_p⁰ by discretising Equations (80) and (88). Firstly, Equation (80) is discretised by multiplying on the left by the trial basis function ψ_q and integrating over the boundary, as was done in the formulation of the boundary term in Equation (19); the final step comes from the definition of B_pb in Equation (48). Equating this with the boundary term in the second row of Equation (90) gives D_p. To discretise Equation (88), we first substitute Equation (83) and apply w = wᵀ, because w is a constant function. Discretising q · n in the same way as e_q · n was discretised in Equation (18) then gives an expression in which 1ᵀ simply adds up the force contributions from each boundary DOF to calculate F_b. Equating the boundary term in the fourth row of Equation (90) with the boundary term in the second row of Equation (73) gives D_p⁰. Therefore, the discrete interconnected system retains a skew-symmetric matrix, as required for a port-Hamiltonian system. Equation (90) encompasses the canonical form, the dynamic equations, and the constitutive law equations of the interconnected system.
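Once the interconnected system matrix of Equation (90) has been assembled with the D_p and D_p⁰ blocks inserted, the skew-symmetry requirement can be checked numerically; a minimal sketch:

    import numpy as np

    def is_skew_symmetric(J, tol=1e-12):
        # A discrete PH system requires J = -J^T (up to round-off).
        return np.allclose(J, -J.T, atol=tol)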
It is important to note that if the right-side boundary condition, p R , was non-zero then there would be an extra boundary input and output in the canonical form of Equation (90). Also, if the top and bottom Neumann boundaries were non-zero conditions, the canonical form of Equation (90) would have interconnection inputs u int p and outputs y int q with the same definitions as the discrete version of Equation (35). A canonical form for the Neumann conditions would also have to be created with inputs/outputs for the boundary u q , y p and for the interconnection u int q , y int p with the same definitions as the discrete version of Equation (34). The two canonical forms could then be connected with Equation (50). Although the formulation of the interconnected canonical form seems complicated, it is not necessary for implementation. The full system canonical form is only discussed to reassure the reader that the total system is equivalent to a combination of canonical forms and therefore is a Stokes-Dirac structure and can have mixed causality boundary connections. The usefulness of the canonical forms, Equations (47) and (73), is that they allow calculation of the interconnection relations, which can then be implemented in FEniCS, as shown in Appendix C.
Interconnection Model Results
This section displays the results for the coupled wave-EM model with a sinusoidal input voltage. The time step for the simulations is Δt = 5 × 10⁻⁴ s, using an SV time integration scheme. The problem is solved on both a simple rectangular domain with 21110 elements and a square domain with a central input and 13067 elements.
Rectangle Domain
The input voltage condition for the rectangular domain is given in Equation (98). The p variable of the wave equation is displayed after 0.3 s and 1.1 s in Figs. 14 and 15, respectively. The initial large sinusoidal wave, which is due to the input over the first 0.25 s, flows through the domain as expected. This wave is followed by smaller repeating waves caused by the lingering oscillations of the EM system.
To confirm that the energy error of the model is bounded, the energy residual of each domain and the total energy residual are plotted in Fig. 16. The residual for each domain is the difference between the accumulated energy that has entered the domain and the internal energy in the domain: the wave residual is the difference between the energy in the wave domain and the accumulated energy that has entered the wave domain through its boundaries, and the EM residual is the difference between the energy in the EM domain and the accumulated energy that has entered the EM domain through its boundaries.
As shown, the energy residual is bounded for increasing times, as expected for a symplectic time integration scheme. For a second-order symplectic method such as SV, the energy residual bound should be quadratically dependent on the time step; this relationship is proven in [18]. To ensure that the bounded energy residual is indeed quadratically dependent on the time step, the maximum energy residual over a 20 s simulation is plotted against the step size in Fig. 17. As discussed in Sect. 4, the SM scheme can also be used to conserve quadratic invariants exactly. As shown in Fig. 17, we achieve an energy residual of < 10⁻¹¹ for a 20 s simulation with SM, confirming that our model, with an SM scheme, conserves energy exactly for linear, coupled LPM-continuum systems.
Square Domain with Central Input Boundary
A diagram of the domain for this section is shown in Fig. 18, and the input voltage is the same as in Equation (98). Following the notation of the previous sections, the boundary conditions are denoted as in Fig. 18. At long times, the uniform circular wave structure breaks down due to waves rebounding off the walls. However, the wave behaviour should still retain some structure; for example, there should be symmetry about the midline parallel to the x axis. Figure 21 shows p_n after 8 s of simulation, once the uniform wavefronts have completely broken down. As can be seen, there is still symmetry about the midline parallel to the x axis, further showing this method's ability to accurately model the physical structure of the system. The energy residual for an 8 s simulation with the SV scheme is shown in Fig. 22. As in previous sections, the energy residual is oscillatory and bounded, as expected. Also, as in the previous sections, when using the SM method the energy residual is < 10⁻¹², i.e., exactly conserved to within round-off error.
Fig. 22 Energy residual vs time for the interconnected wave-EM simulation with a square domain, using Störmer-Verlet time integration. Total residual is the sum of the energy residuals from the wave and EM domains. Wave residual is the difference between the energy in the wave domain and the accumulated energy that has entered the wave domain through its boundaries. EM residual is the difference between the energy in the EM domain and the accumulated energy that has entered the EM domain through its boundaries.
Conclusion
For the modelling of continuum systems, a port-Hamiltonian Galerkin finite element method has been developed that, in general, has a bounded energy residual and linear long-time error growth for the state variables. This method allows mixed boundary conditions without the need for Lagrange multipliers or user-defined parameters. The discretisation is shown to be equivalent to a coupling of canonical port-Hamiltonian forms, which allows mixed interconnections with other canonical port-Hamiltonian models. The discretisation is also shown to be symplectic in both time and space. For our specific 2D linear wave equation system, we also show exact energy conservation with a symplectic (implicit) midpoint method that guarantees conservation of quadratic first integrals. We also compare against an analytical solution and show standard order of convergence for the state variables with respect to the temporal and spatial discretisations. A modal analysis is performed and the eigenvalues are verified to be accurate. The boundary conditions are implemented in variational form, in a weak manner, without the need for penalty methods. In addition, the method is capable of monolithic coupling with arbitrary LPMs. The coupled model is shown to also have a bounded energy residual, with the standard temporal order of convergence for the SV time integration scheme and exact energy conservation for the SM time integration scheme. The example model of a 2D linear wave equation coupled with an EM linear actuator is a good proof of concept for more advanced couplings between Hamiltonian PDEs and LPMs. Future work will implement control algorithms for coupled models, in order to improve the control of multiphysics, multidomain, and nonlinear problems.
Appendix A: FEniCS Code Example
To give the reader details on how the variational forms in Sect. 3 are implemented in FEniCS, this section shows code snippets for the SE scheme with weak-form and strong-form boundary conditions. The full code can be found at https://github.com/FinbarArgus/portHamiltonian_FEM.git. Firstly, FEniCS and the meshing software mshr are imported, and the domain shown in Fig. 2 is created with the following lines of code:

    from fenics import *
    import mshr

    mainRectangle = mshr.Rectangle(Point(0.0, 0.0), Point(L_x, L_y))
    mesh = mshr.generate_mesh(mainRectangle, res)

where res is the resolution of the mesh. The function spaces, trial functions, test functions, and functions for the DOFs at the previous and current time steps can then be created as in Equation (100); a sketch of this step is given after this derivation. The variational form in Equation (75) that allows the EM domain input boundary term, Equation (87), to be applied in FEniCS is Equation (101), where v_y2 is the 2nd component of v_y, the ODE test function vector. Substituting Equation (87) into Equation (101) and taking v_y2 out of the integral, since it is a real function space that is constant over the boundary, gives an intermediate form. Now, since the inside integral evaluates to a constant over the boundary, the outside integral can be evaluated to give v_y2 L_y ∫_{∂L} y_q ds_L.
Finally, substituting y_q from Equation (35) and e_q = k_w q gives k_w L_y v_y2 ∫_{∂L} (q · n) ds_L.
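For completeness, a sketch of the function-space creation step referred to above as Equation (100); the element orders here are illustrative, as the paper's legend covers combinations from P1RT1 upwards:

    from fenics import *

    P1 = FiniteElement('P', triangle, 1)        # scalar Lagrange for p
    RT1 = FiniteElement('RT', triangle, 1)      # Raviart-Thomas for q
    V = FunctionSpace(mesh, MixedElement([P1, RT1]))

    p, q = TrialFunctions(V)
    v_p, v_q = TestFunctions(V)

    x_m = Function(V)                            # previous time step DOFs
    p_m, q_m = split(x_m)
    x_n = Function(V)                            # current time step DOFs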
The interconnection with the boundary interconnection forms, Equations (100) and (104), can be implemented in FEniCS for the SE scheme with the code shown in Appendix C.

Appendix D (constants, fragment): wave domain stiffness k_w = 3.0 kg m s⁻²; wave speed squared c² = k_w/ρ_w = 1.5 m² s⁻²; wave domain x-length

| 14,582.4 | 2021-07-13T00:00:00.000 | [ "Mathematics" ] |
Optical Fiber-Tip Sensors Based on In-Situ µ-Printed Polymer Suspended-Microbeams
Miniature optical fiber-tip sensors based on directly µ-printed polymer suspended-microbeams are presented. With an in-house optical 3D μ-printing technology, SU-8 suspended-microbeams are fabricated in situ to form Fabry–Pérot (FP) micro-interferometers on the end face of standard single-mode optical fiber. Optical reflection spectra of the fabricated FP micro-interferometers are measured and fast Fourier transform is applied to analyze the cavity of micro-interferometers. The applications of the optical fiber-tip sensors for refractive index (RI) sensing and pressure sensing, which showed 917.3 nm/RIU to RI change and 4.29 nm/MPa to pressure change, respectively, are demonstrated in the experiments. The sensors and their optical µ-printing method unveil a new strategy to integrate complicated microcomponents on optical fibers toward ‘lab-on-fiber’ devices and applications.
Introduction
Optical fiber sensors have achieved remarkable success in a wide range of applications, such as inertial navigation systems, environmental and structural monitoring, biochemical sensing, healthcare, the food industry, and homeland security, because of their small size, electromagnetic interference (EMI) immunity, high sensitivity, remote sensing, and multiplexing capabilities [1][2][3]. Recently, with the development of micro-/nano-technology, optical fiber-tip sensors integrated with functional materials and microscale components have attracted considerable attention [4][5][6]. This is because an optical fiber end-face is an inherently light-coupled substrate [5], which provides an ideal platform for the development of compact and highly integrated photonic devices and sensors, stepping toward a new horizon of 'lab-on-fiber'.
A number of optical fiber-tip sensors with various structures and working mechanisms were proposed. For instance, one of the widely used structures in optical fiber-tip sensors is Fabry-Pérot (FP) interferometers, which are typically composed of a suspended diaphragm to form an FP cavity on optical fiber ends. Because of their simple structure and high sensitivity, FP cavity-based fiber-tip sensors have been intensively investigated for detection of various physical and biological parameters, such as pressure [7][8][9], temperature [10], acoustic wave [11], and refractive index (RI) [12]. If the reflectivity of the mirrors of such FP cavities is increased, optical microresonators can be formed on the end face of optical fiber for e.g., ultrasound sensing [13]. Moreover, localized surface plasmon resonance (LSPR) biochemical sensors were fabricated by patterning periodic gold nanodot arrays [14], and high-performance surface-enhanced Raman scattering (SERS) sensors were demonstrated by capping optical fiber end-faces with multilayer silver nanoparticles [15].
However, the challenge is that the tiny size and large aspect-ratio of optical fibers make the fabrication of optical fiber-tip devices difficult by using conventional microfabrication technologies. Although a diversity of fabrication techniques-such as photolithography [16], nanoimprinting [17], interference lithography [18], electron-beam lithography [19], focused ion-beam milling [20], multiphoton polymerization [21][22][23][24][25]-have been proposed to overcome this challenge, most of them have common drawbacks of being time consuming, having material specificity, and lacking flexibility.
Recently, we demonstrated that suspended-mirror devices (SMDs) can be directly fabricated on the end face of fiber-optic ferrules by using an optical 3D µ-printing technology [26]. However, such ferrule-top SMD sensors are still too large for applications where the sensors need to be deployed into very small spaces such as microfluidic channels and blood vessels. In this paper, we present an improved optical µ-printing technology to directly fabricate suspended-microbeams on the end face of a standard single-mode optical fiber. Figure 1a depicts the structural design and the working principle of the optical fiber-tip sensor based on suspended-microbeams. The suspended microbeam on the optical fiber end-face forms a fiber-top air cavity. As a result, optical interference occurs between the light waves reflected from the interface between the fiber end-face and air, and from the interfaces between air and the two surfaces of the suspended microbeam. If the device is immersed into a liquid or gas whose RI is lower than the indices of the glass and the polymer, it can be used to monitor the change of the RI of the liquid or gas through the shift of the interference fringes of the reflection spectrum. As shown in Figure 1b, various suspended-microbeams with different geometries can be designed to meet the needs of diverse sensor applications.
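The working principle can be illustrated with a two-beam (low-finesse) model of the air cavity, neglecting the third interface; in the following sketch the mirror amplitude reflectivities r1 and r2 are hypothetical values, and the tracked dip red-shifts as the cavity RI increases:

    import numpy as np

    def fp_reflection(wl_nm, n_cavity, L_um, r1=0.2, r2=0.2):
        # Two-beam approximation: R = r1^2 + r2^2 + 2 r1 r2 cos(4 pi n L / wl).
        phase = 4.0 * np.pi * n_cavity * (L_um * 1e3) / wl_nm
        return r1**2 + r2**2 + 2.0 * r1 * r2 * np.cos(phase)

    wl = np.linspace(1540.0, 1565.0, 2501)       # nm; window with a single dip
    for n in (1.3351, 1.3360):                   # two ambient indices
        dip = wl[np.argmin(fp_reflection(wl, n, L_um=30.9))]
        print(n, dip)                            # the dip red-shifts with n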
Materials
EPON resin SU-8 (Momentive Ltd., Waterford, NY, USA) was used in the fabrication of the suspended-microbeams because of its good properties, including high transparency in both the visible and near-infrared bands, chemical resistance, and good mechanical strength. The refractive index of photopolymerized SU-8 at a wavelength of around 1550 nm is 1.57 [27].
Optical 3D µ-Printing Processes
An in-house optical exposure setup, as shown in Figure 2a, was used to fabricate the optical fiber-tip sensors. The setup consists of a high-power UV source (365 nm), a UV-grade digital mirror device (DMD) for generating optical patterns, projection optics for scaling down the optical images, and a digital camera for machine-vision metrology [28][29][30]. As depositing uniform thin layers of SU-8 is a vitally important step, an ultrasonic nozzle was utilized to integrate the spray-coating process with the optical maskless exposure technology to establish an optical 3D µ-printing technology. An xy-axis motorized stage was used to precisely align the optical fiber to the right positions for UV exposure and SU-8 film deposition, respectively. The thickness of a single-layer SU-8 film can be tailored by adjusting the pumping rate of the syringe pump and the scanning velocity, as well as the gas pressure of the ultrasonic nozzle and the distance between the nozzle and the substrate. In order to evaporate the solvent after spray coating, ceramic heaters and a thermocouple were embedded in the mount of the optical fiber to form a miniature integrated digital microheater.
To fabricate a 3D microstructure, as shown in Figure 2b, the optical fiber was first moved to a position below the ultrasonic nozzle for spray coating of a thin layer of SU-8. The film was then soft-baked in situ to remove the solvent; the soft-bake time was optimized according to the concentration of the SU-8 solution and the film thickness. After soft baking, the sample was moved to the pre-aligned position for optical exposure, with the assistance of the digital camera-based machine-vision metrology. Thereafter, the image data, sliced from the CAD model of the 3D microstructure by self-developed add-on software, was used to generate optical patterns to irradiate the SU-8 film on the optical fiber end-face. The typical exposure time was about 10 s at a power density of 35.86 mW/cm². After exposure, the sample was post-baked in situ by using the integrated digital microheater. These processes were automatically repeated for the fabrication of the next layer of the 3D microstructure. Finally, the sample was developed in PGMEA for about 15 min.
Fabrication Results
Figure 3a-c show the scanning electron microscope (SEM) images of three SU-8 suspended-microbeams fabricated on the end-faces of optical fibers. From the SEM images, the measured thicknesses of the three suspended-microbeams are 12.2, 1.0, and 6.9 µm, respectively, and the cavity lengths between the optical fiber and the suspended-microbeams are 30.9, 15.6, and 40.4 µm, respectively.
Reflection Spectra
A broadband light source, a circulator, and an optical spectrum analyzer (OSA) were used to measure the reflection spectra of the fabricated optical fiber-tip FP micro-interferometers, as shown in Figure 4. The fast Fourier transform (FFT) of the measured optical spectra was calculated to analyze the cavity information of the FP micro-interferometers. The reflection spectra and their FFT results are shown in Figure 3d-f, respectively. It can be seen that the positions of the highest peaks in the FFT results are in good accordance with the lengths of the air cavities shown in the SEM images. For the FP cavities with thick suspended beams, as shown in Figure 3d,f, there are three peaks in the FFT results, which is consistent with previous SU-8 FP cavities fabricated on fiber-optic ferrules [26]. For the FP cavity with a thin suspended diaphragm, as shown in Figure 3e, however, the peaks tend to merge into one peak, which helps to suppress the fluctuation of the interference fringes in the reflection spectrum. The cavity lengths deduced from the main peaks of the FFT results are 29.3, 14.7, and 39.1 µm, respectively, which agree well with the counterparts measured from the SEM images.
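A sketch of the FFT cavity analysis: because the fringes are periodic in optical frequency (with free spectral range c/2nL), the spectrum is resampled uniformly in frequency and the dominant FFT bin gives the round-trip delay, hence the cavity length. Here wl_nm (ascending) and refl are assumed measured arrays:

    import numpy as np

    def cavity_length(wl_nm, refl, n=1.0):
        c = 2.998e8
        freq = c / (wl_nm * 1e-9)                    # descending for rising wl
        f_uni = np.linspace(freq.min(), freq.max(), freq.size)
        r_uni = np.interp(f_uni, freq[::-1], refl[::-1])
        amp = np.abs(np.fft.rfft(r_uni - r_uni.mean()))
        tau = np.fft.rfftfreq(f_uni.size, d=f_uni[1] - f_uni[0])
        tau_peak = tau[np.argmax(amp[1:]) + 1]       # skip the DC bin
        return c * tau_peak / (2.0 * n)              # L = c * tau / (2n)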
Refractive Index Sensing
One of the promising applications of the optical fiber-tip sensors is sensing the refractive index of liquids. It is known that the wavelength of a resonance dip in the interference spectrum of an FP cavity can be expressed as

λ_k = 2nL/k, (1)

where n is the refractive index of the medium in the cavity, L is the cavity length, and k is the order of the spectral dip. Therefore, the tracked spectral dip will shift to a longer wavelength when the refractive index of the measurand liquid increases. For a small change of refractive index Δn, the shift of a specific spectral dip is

Δλ_k = (2L/k) Δn = (λ_k/n) Δn. (2)

The response of the fabricated optical fiber-tip sensor to the change of the RI of surrounding liquids was measured by using the setup shown in Figure 4a. CaCl₂ solutions with different concentrations were used as the testing liquids, whose refractive indices were calibrated by using a commercial refractometer. After measurement of each sample, the sensor was rinsed with deionized water and then dried in nitrogen flow. The measured reflection spectra of the sensor in different liquid samples are shown in Figure 5. The spectral dip located at 1553.7 nm when the refractive index of the solution is 1.3351 was monitored, as marked by the dashed line. A red shift of the spectral dip was observed with the increment of the refractive indices of the liquid samples. Figure 6 shows the wavelength shift of the spectral dip with respect to the RI of the liquids. The sensitivity of the optical fiber-tip RI sensor was calculated by linear regression to be 917.3 nm/RIU, which is close to the theoretical value of 1159.4 nm/RIU predicted by using Equation (2). Compared with other optical evanescent field-based refractive index sensors [31,32], this open-cavity optical sensor has much higher sensitivity. Nevertheless, the spectral dip becomes gradually shallower with the increase of the liquid's RI because of the weakening of the Fresnel reflections at the interfaces.
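The 917.3 nm/RIU figure comes from a straight-line fit of dip wavelength against RI; a sketch with hypothetical (RI, dip) pairs of the kind read off Fig. 5:

    import numpy as np

    ri = np.array([1.3351, 1.3405, 1.3460, 1.3516, 1.3572])      # hypothetical
    dip_nm = np.array([1553.7, 1558.6, 1563.7, 1568.8, 1573.9])  # hypothetical
    sensitivity, _ = np.polyfit(ri, dip_nm, 1)
    print(round(sensitivity, 1), 'nm/RIU')   # on the order of the reported 917.3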
Gas-Pressure Sensing
The optical fiber-tip sensor can also be used to remotely monitor gas pressure in very small spaces. At room temperature (20-25 °C), the RI of air is known as a function of pressure p (Pa) and temperature t (°C) as [33][34][35]

n_air = 1 + 2.8756 × 10⁻⁹ × p / (1 + 3.661 × 10⁻³ × t), (3)

where the quadratic term in p can be ignored in case the air pressure is below 1 MPa. If the cavity length is assumed to be constant, the wavelength shift of the spectral dip of the kth-order interference fringe with respect to the pressure change is thus

Δλ_k = α λ_k Δp, (4)

where the coefficient α is

α = 2.8756 × 10⁻⁹ / (1 + 3.661 × 10⁻³ × t). (5)

The coefficient α is 2.679 × 10⁻⁹ /Pa at room temperature. The above equations reveal an approximately linear relationship between the wavelength shift of the optical fiber-tip sensor and the air pressure at a constant temperature. The response of the optical fiber-tip sensor to the change of gas pressure was measured by using a gas chamber, whose gas pressure was controlled by a high-pressure nitrogen cylinder with a gas flow regulator, as shown in Figure 4b. The gas pressure could be tuned from 0 to 700 kPa in steps of 50 kPa. A commercial pressure meter was used to monitor the gas pressure in the chamber. Figure 7 shows the measured response of the optical fiber-tip pressure sensor to the change of gas pressure. With the increase of chamber pressure, a red shift of the spectral dip was observed because of the increase of the refractive index of the nitrogen gas. The optical fiber-tip sensor showed good linearity and reversibility with both increase and decrease of the gas pressure. According to the linear regression, the sensitivity of the optical fiber-tip sensor to the gas-pressure change is 4.29 nm/MPa, which is close to the theoretical value of 4.17 nm/MPa predicted by Equation (4). With the calculated noise level (i.e., 0.031 nm), the detection limit of the optical fiber-tip gas-pressure sensor is estimated to be about 22.2 kPa at a signal-to-noise ratio of 3 [36]. Compared with other diaphragm-based optical fiber-tip pressure sensors, this open-cavity optical sensor has relatively low sensitivity but a wide measurement range, and is thus suitable for high air-pressure measurement applications.
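The theoretical 4.17 nm/MPa value can be reproduced from Equations (3)-(5) directly; a sketch, using the temperature-corrected coefficient above and assuming a tracked dip near 1555 nm:

    t = 20.0                                   # deg C, assumed room temperature
    alpha = 2.8756e-9 / (1.0 + 3.661e-3 * t)   # ~2.679e-9 per Pa, Eq. (5)
    lam_nm = 1555.0                            # assumed dip wavelength
    print(round(alpha * lam_nm * 1e6, 2), 'nm/MPa')   # ~4.17, cf. Eq. (4)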
Conclusions
In summary, we have demonstrated a miniature optical fiber-tip sensor by directly printing polymer suspended microbeams on the end face of a standard single-mode optical fiber. The reflection spectra of the fiber-tip devices have been measured and used to analyze the Fabry-Pérot (FP) cavities formed by the suspended microbeams. The optical fiber-tip sensors have been demonstrated to detect the RI change of liquids and the gas pressure of the ambient environment, respectively. High sensitivities of 917.3 nm/RIU to RI change and 4.29 nm/MPa to gas-pressure change have been achieved. Such ultra-small optical fiber-tip sensors with remote sensing capability are very promising in microfluidic biosensing and environmental monitoring applications. | 5,016.2 | 2018-06-01T00:00:00.000 | [
"Physics"
] |
About a Class of Positive Hybrid Dynamic Linear Systems and an Associate Extended Kalman-Yakubovich-Popov Lemma
This paper formulates an "ad hoc" robust version under parametrical disturbances of the discrete version of the Kalman-Yakubovich-Popov Lemma for a class of positive hybrid dynamic linear systems which consist of a continuous-time system coupled with a discrete-time or a digital one. An extended discrete system, whose state vector contains both the digital one and the discretization of the continuous-time one at sampling instants, is a key analysis element in the formulation. The hyperstability and asymptotic hyperstability properties of the studied class of positive hybrid systems under feedback from any member of a nonlinear (and, eventually, time-varying) class of controllers, which satisfies a Popov's-type inequality, are also investigated as linked to the positive realness of the associated transfer matrices.
Introduction
Continuous-time and discrete-time positive systems have been studied in detail in recent years [1][2][3][4][5][6][7][8][9][10]. In particular, if both the state and output possess such a property, the positivity is said to be internal or, simply, the system is positive. If the output possesses such a property, the system is said to be externally positive. Therefore, positive systems are intrinsically interesting for describing problems like Markov chains, queuing problems, certain distillation columns, and biological and other physical compartmental problems where populations or concentrations cannot be negative [2,3]. A related property is that time-invariant dynamic linear systems which are externally positive, when they have positive real or strictly positive real transfer matrices, are, in addition, hyperstable or asymptotically hyperstable, that is, globally Lyapunov stable under any nonlinear and/or time-varying feedback device satisfying a Popov's-type inequality for all time [11,12]. Such a property of asymptotic hyperstability generalizes that of absolute stability [13][14][15], which in turn generalizes the most basic concept of stability of dynamic systems. See, for instance, [13,14,[16][17][18][19][20][21][22][23][24][25][26][27][28][29] and references therein. The hyperstability property, which has a frequency-based physical interpretation in terms of positive realness of the transfer function of a feed-forward linear block, is also related to external positivity of the input-output relation rather than to (internal) positivity of the state-trajectory solution, which is equivalent to positivity of the instantaneous input-output power and the input-output energy [2,3,13,15,30]. It is well known that closed-loop hyperstability is, by nature, a powerful version of closed-loop stability since it refers to the stability of a hyperstable linear feed-forward plant (in the sense of positive realness of the associated transfer matrix) under a wide class of applied feedback controllers. The above important properties make more complex classes of dynamic systems, including those lying in the class of continuous/digital hybrid systems, very attractive potential research issues with applied projection. On the other hand, the class of hybrid systems consisting of continuous-time and discrete-time (or digital) systems is of increasing interest since many existing industrial installations combine both kinds of systems. An elementary well-known case is when a discrete-time controller is used for a continuous-time plant. Another case is related to teleoperation systems where certain variables evolve in a discrete-time or digital fashion. Background literature and related relevant results are given in [1,7,11,16,17,26,31,32] and some of the references therein. The objective of this paper is to address appropriate versions of the Kalman-Yakubovich-Popov Lemma (KYP-Lemma) for a class of hybrid systems consisting of coupled linear continuous-time and digital dynamic subsystems, firstly proposed in [31], provided that they are, furthermore, positive [7], in the sense that, for any initial condition and any admissible controls, both with nonnegative components, all the components of the state and output trajectory solutions are nonnegative for all time [33]. General related results on positivity of wide usefulness are available in [34,35].
The paper is organized as follows. Firstly, a notation and terminology subsection is allocated below in this introductory section. Section 2 characterizes the class of hybrid systems dealt with and formulates, with explicit results, its positivity and some of its stability and asymptotic stability properties. A relevant auxiliary system for those studies is the so-called extended discrete hybrid system, for which only the signals at sampling points are relevant and whose state is composed of both the digital substate and the discretized version of the continuous-time subsystem at sampling instants. Some of the obtained results display how the stability is kept under small coupling between the continuous-time and the discrete-time digital substates provided that the continuous-time and digital dynamics are stable. The section also contains controllability results provided that a nominal system version keeps that property. Section 3 is devoted to the continuous and discrete versions of the KYP-Lemma for a simplified version related to the relevant pairs of the system and control matrices and for the general version related to the whole state-space realization. The relationships between the positive realness of the transfer matrix and the state-space realization are characterized for both the positive extended discrete hybrid system and the whole hybrid system through the KYP-Lemma and Youla's factorization lemma. The obtained results are formulated in terms of robustness in the sense that the positive realness and the system's positivity of a nominal version of the hybrid system are kept under certain explicit conditions on the parametrical disturbances which deviate the hybrid system from its nominal parameterization. Section 4 relates the former results on positive realness to the hyperstability and asymptotic hyperstability properties of the auxiliary extended discrete hybrid system and to those of the whole hybrid system for the case when the plant input is obtained via feedback from a nonlinear and eventually time-varying device which satisfies a Popov's-type inequality. Some further study is also provided in Section 5 related to the design of a stabilizing linear control scheme which either simply stabilizes the dynamics or improves the relative stability degree of the hybrid system in an internal control loop prior to the operation via any member of the given class of nonlinear and time-varying controllers so as to ensure the hyperstability of the whole closed-loop system. Finally, conclusions end the paper. A matrix A ∈ R+^{n×m} if it is of order n×m with all its entries nonnegative. R− is the set of nonpositive real numbers. Note that R = R+ ∪ R− and 0 ∈ (R+ ∩ R−). Vectors and matrices are nonpositive (being, respectively, in R−^n and R−^{n×m}) if they have nonpositive entries. Z, Z+, and Z− are the set of integer numbers and its subsets of nonnegative and nonpositive integers, respectively.
(b) A matrix A ∈ R+^{n×m} is said to be positive (denoted by A > 0) if it has at least a positive entry. A nonnegative matrix A ≥ 0 satisfies either A > 0 or A = 0. A matrix A ∈ R−^{n×m}, which has at least a negative entry, is said to be negative and denoted by A < 0 and, if all its entries are negative, then it is denoted by A ≪ 0.
(c) A matrix A ∈ R+^{n×m} is said to be strictly positive (denoted by A ≫ 0) if all its entries are positive. Similarly, a vector v ∈ R+^n is said to be positive (denoted by v > 0) if it has at least a positive component. It is said to be strictly positive (denoted by v ≫ 0) if all its components are positive. Also, the notations A ≫ B and v ≫ u for matrices and vectors mean, respectively, A − B ≫ 0 and v − u ≫ 0. Interpretations of expressions like A > B, v > u, and v ≥ u follow directly from the above ones.
(d) P ≻ 0 denotes that the square real matrix P is positive definite. (e) I_n is the nth identity matrix. (f) A matrix A ∈ R^{n×n} is said to be stable, or a stability matrix, if its characteristic polynomial is Hurwitz or, equivalently, if all its eigenvalues have negative real parts. The matrix measure of the matrix A (with respect to any norm) is μ(A) = lim_{ε→0⁺} ((‖I_n + εA‖ − ‖I_n‖)/ε). The spectrum of A is the set of its eigenvalues, denoted by Sp A, and its characteristic polynomial is denoted by p_A(s) = Det(sI_n − A), where s is a complex indeterminate and Det(⋅) stands for the determinant of the matrix (⋅). A subscript in the matrix measure μ_(⋅)(A) denotes the measure with respect to a particular (⋅)-norm. A matrix A ∈ R^{n×n} is said to be convergent (or Schur) if all its eigenvalues lie in the open unit circle. An H∞ complex function is Schur if its H∞-norm is bounded by unity, while it is said to be strictly bounded real (SBR) if, in addition, its coefficients are real and its H∞-norm is strictly bounded by unity. (g) κ_p(A) = ‖A‖_p‖A⁻¹‖_p is the condition number of the matrix A ∈ R^{n×n} with respect to the p-norm. It is infinity if and only if the matrix is singular. In particular, κ₂(A) = ‖A‖₂‖A⁻¹‖₂ is the condition number of A with respect to its ℓ₂ (or spectral) norm, which equals the quotient of its maximum and minimum singular values when it is square. (A numerical sketch of the quantities in items (f) and (g) follows the notation list.)
(i) a_i and a^i denote, respectively, the ith column or row of a real matrix A, the superscripts "T" and "*" denoting transpose and conjugate transpose, respectively. A^k, k being an integer number, denotes the kth power of the matrix A and, provided that A = (a_ij), A^(s) = (a_ij^(s)) is an associate matrix to A defined as a_ij^(s) = 1 if a_ij ≠ 0 and a_ij^(s) = 0 otherwise. Note that A ≥ 0 ⇔ A^(s) ≥ 0. v_i denotes the ith component of the real vector v and v ≥ 0 ⇔ v^(s) ≥ 0. Thus, any positive system always has an associate positive system, which defines the pairwise relations input components/state-output components and state components/output components from its associate influence graph [2,3,5], by defining all its parameterizing matrices according to the above criterion.
(k) e_i is the unity vector of R^n whose unique nonzero component is the ith one, which is unity.
(l) The notation x[k] stands for a discrete/digital variable or vector which is only defined at sampling instants t = kT, k ∈ Z+, with T being the sampling period. If x is a digital variable, then it is only defined at sampling instants. If x is a discrete variable (i.e., one arising from the discretization of a continuous variable), then x[k] = x(kT) and either of the two equivalent notations is used indistinctly in such a case.
(m) The superscript T stands for the transpose of a vector or matrix, while Ker(F) stands for the null-space of the operator F.
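The following minimal sketch (Python, with a hypothetical matrix) illustrates two of the quantities defined in items (f) and (g): the matrix measure with respect to the spectral norm, which for the 2-norm reduces to the largest eigenvalue of the symmetric part, and the Schur (convergence) test.

```python
import numpy as np

def mu_2(A):
    """Matrix measure w.r.t. the spectral norm: max eig((A + A^T)/2)."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

def is_schur(A, tol=1e-12):
    """True if all eigenvalues lie in the open unit circle."""
    return np.abs(np.linalg.eigvals(A)).max() < 1 - tol

A = np.array([[0.5, 0.2],
              [0.0, 0.8]])
print(mu_2(A), is_schur(A))
```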
Hybrid System and Positivity and Controllability Properties
Consider the subsequent hybrid linear system: (i) and are the matrices of continuous-time and of digital dynamics, respectively, and and are, respectively, the matrices of the dynamics of the couplings between the digital and continuous-time substates and between the continuous-time discretized and digital substates. The matrix is the matrix of the dynamics of the coupling between the sampled continuous-time substate and its time evolution over the next sampling interval.
(ii) and are continuous-time and digital control matrices and is a coupling control matrix from the sampled continuous-time control to the next intersample period continuous-time substate.
(iii) The matrices , , and and in (1c) are the various output and input-output interconnection matrices generating the output of the hybrid system from its continuous-time substate, its discretized value at sampling instants, the digital substate, and the continuous-time input and its sampled value.
The orders of all the real constant system-parameterizing matrices displayed in (1a), (1b), and (1c) agree with the corresponding dimensions of the continuous, discrete, and digital substates (), [], and [] and of the inputs and outputs. Note that the hybrid system is driven by the control () and by its samples () of period acting as two independent control actions. At sampling instants, it follows by direct calculus from (1a), (1b), and (1c) that the hybrid system is described by the following = + th order extended discrete-time system of sampling period , driven by a fictitious extended input sequence {V[]} ⊂ R + whose element V[] depends on : [, ( + 1)] → R ; since only finite input jumps happen at sampling instants, impulsive jumps not being considered, V[] depends on : [, ( + 1)) → R , since the updated value [ + 1] at = ( + 1) does not contribute to V[], for any integer ≥ 0, where V[] ∈ R + . The derivation of the extended discrete , (2a), (2b), and (2c), subject to (3)-(8), from the hybrid system , (1a), (1b), and (1c), is direct from a time-integration of (1a), (1b), and (1c) on a sampling time interval [, ( + 1)) with initial conditions at = . The following positivity result holds as a direct extension from the SISO (single-input single-output) hybrid parameterization of [7,11,31].
Theorem 1. The system is positive if and only if ∈ . Under the above given conditions, ∈ R for ≥ 0, if : [0, ∞) → R , and then the extended discrete system is also positive.
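Although Theorem 1's exact conditions were lost in extraction, the sampled-data construction behind it can be illustrated numerically. The sketch below (Python; A, B, and T are hypothetical, not the paper's parameterization) builds the discretized pair over one sampling period and checks the nonnegativity that a Metzler continuous-time matrix and a nonnegative input matrix induce.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

# Sampled continuous substate: x[k+1] = Phi x[k] + Gamma u[k],
# with Phi = e^{AT} and Gamma = int_0^T e^{As} B ds.
A = np.array([[-1.0, 0.3],
              [0.2, -0.5]])   # Metzler: nonnegative off-diagonal entries
B = np.array([[1.0],
              [0.0]])
T = 0.1

Phi = expm(A * T)
Gamma, _ = quad_vec(lambda s: expm(A * s) @ B, 0.0, T)

print("Phi >= 0:", (Phi >= 0).all())       # True for Metzler A
print("Gamma >= 0:", (Gamma >= 0).all())   # True for Metzler A and B >= 0
```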
Theorem 2. The following properties hold:
(i) Assume that (a) ∈ Then, is convergent and the unforced is globally asymptotically stable.
(2) ‖ ‖ 2 ≤ , ‖ ‖ 2 ≤ , and ‖ ‖ 2 ≤ for some ∈ [0, * ), where and are convergent and Proof. First note that > 0 and is convergent if and only if (−) is nonsingular and (−) −1 > 0 [9]. Direct calculations with (3) and the inverse of a 2 × 2 block-partitioned matrix [36] yield in this case where As a result, there exists ( − ) −1 with > 0 and ( − ) −1 ≥ 0, which holds if and only if is convergent, so that the unforced is globally asymptotically stable. Property (i) has been proved. To prove Property (ii), note that > 0 and is convergent, so that ( − ) is a nonsingular -matrix and ( − ) −1 > 0 exists. Since ( − ) is a nonsingular matrix, all its leading minors are positive. Thus, ( − ) ≻ 0, then ( − ) ≻ 0 and ( − ) ≻ 0 and, equivalently, ( − ) −1 ≻ 0 and ( − ) −1 ≻ 0. On the other hand, one has Since ≥ from the hypotheses, ( − ) −1 ( − ) ≥ 0 and if To prove Property (iii), note that since is Hurwitz and is convergent, = 0 + Ã = 0 ( 2 + −1 0 Ã), where with Since is Hurwitz and and 0 are convergent, there exists a real constant ≥ 1, which is norm-dependent, such that for all ≥ 0, with − 1 < 0 being not less than the stability abscissa of and 1 being not less than the convergence abscissa of . Since max(‖ ‖ 2 , ‖ ‖ 2 , ‖ ‖ 2 ) ≤ , one gets from (15) that [37] is then convergent, since 0 is convergent, from the continuity of the eigenvalues of a matrix with respect to its entries. Then the unforced is globally asymptotically stable and the unforced is also globally asymptotically stable since is Hurwitz and [] → 0 as → ∞ for any given initial condition. Property (iii) has been proved. Property (iv) follows by redefining = 01 + Ã1 = 01 ( 2 + −1 01 Ã1 ) and then Related to Theorem 2, note that is convergent by construction if is Hurwitz and a guaranteed upper bound of ‖ ‖ is sufficiently small, which increases as the sampling period and the stability abscissa of increase. The next result generalizes Theorem 1 if is not necessarily convergent and is not necessarily Hurwitz.
Example 3. Consider a positive hybrid system with scalar continuous-time and digital subsystems and On the other hand, hypothesis + ( − ) −1 being nonnegative and convergent holds if for some while for some 2 ∈ (0, (1 − − )(1 − / )).Particular numerical values which satisfy all the given joint constraints are, for instance, = 0.1, so that the stable continuous dynamics has a small relative stability, = 0.1, = 0, so that the digital dynamics has a maximum stability degree, and the forced system behavior is independent of the digital self-dynamics, = 0.995, = 0.009.
Corollary 4. The following properties hold:
(i) is convergent, and then the unforced is globally asymptotically stable, if (1) 0 , (12), is nonsingular and there exists If, in addition, is Hurwitz, then the unforced is globally asymptotically stable.
(ii) A sufficient condition for Property (i) to hold is where ∈ R is the stability abscissa of .
Proof. Note that, since 0 is nonsingular, and is nonsingular from Banach's Perturbation Lemma under the condition , Property (i) follows directly from (24). Property (ii) follows from the fact that (22) is a sufficient condition for ‖ Ã‖ 2 < (1 − ||)‖ 0 ‖ 2 in view of the first identity of (13).
The following theorem refers to "controllability" as the property of controllability to the origin and to "reachability" as that of controllability from the origin. Note from (2a), (3), and (6)-(8) on the structure of the matrix that 0 [] + [] = V V[] = [], leading to the state system description driven by a real vector sequence for any integer ≥ 0, where V is reparameterized to some appropriate matrix so as to drive the auxiliary control for some given prefixed ×-matrix = ≥ 0, and (26). The system is controllable if, furthermore, rank ( , ) = .
(iv) The system is reachable if it is controllable and, furthermore, is nonsingular (in particular, if Property (iii) holds and, furthermore, is nonsingular and The system is reachable if is reachable and is nonsingular.
Proof. One gets by direct recursive calculation from (25a) x provided that the input is generated from for ∈ (0, ); = 0, 1, . . . , 2 − 1, and the system is then controllable. This proves the sufficiency part. The necessity part follows from (28) written in the equivalent form: If rank (, ) < 2, then, given [], there exists . Thus, the following linear algebraic system of equations resulting from (31) is incompatible from the Rouché-Frobenius theorem of Linear Algebra. This leads to the proof of the necessity part of the first part of Property (i). Then, the system is controllable if and only if (, ) is full rank. On the other hand, if the pair (, ) is not controllable while the pair ( , ) is controllable, the system is approximately controllable with state targeting error Property (i) has been proved. On the other hand, note from (1a) that is nonsingular, ∀ ∈ Sp, where so that is controllable from (36), since which is guaranteed if Condition (40) holds if ∈ [0, √ 2 + −1 −). This guarantees that rank (, ) = 2 and is controllable. Since rank ( , ) = , the system is controllable from Property (ii). Property (iii) has been proved.
To prove Property (iv), note that reachability of the discrete is guaranteed from controllability to the origin and the nonsingularity of its matrix of dynamics . Those conditions are guaranteed from the conditions of Property (iii) if is nonsingular and Note that if (26) is tested for ∈ (Sp ) ∩ { ∈ C : || ≥ 1} (i.e., for the unstable and critically stable modes of ), then it becomes a stabilizability test of the current provided that the nominal is stabilizable. In other words, stabilizability is the property implying that any uncontrollable mode is asymptotically stable while any unstable or critically stable mode is controllable.
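A minimal numerical illustration of the two tests just discussed follows: the Kalman rank test for controllability and the PBH-style stabilizability test restricted to modes with |z| ≥ 1 (the pair below is hypothetical, not the paper's parameterization).

```python
import numpy as np

def controllable(A, B):
    """Kalman rank test: [B, AB, ..., A^{n-1}B] must have full rank."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    return np.linalg.matrix_rank(C) == n

def stabilizable(A, B, tol=1e-9):
    """PBH test on unstable/critically stable modes only (|z| >= 1)."""
    n = A.shape[0]
    for z in np.linalg.eigvals(A):
        if abs(z) >= 1 - tol:
            M = np.hstack([z * np.eye(n) - A, B])
            if np.linalg.matrix_rank(M) < n:
                return False   # an unstable mode is uncontrollable
    return True

Ae = np.array([[1.1, 0.0], [0.1, 0.5]])
Be = np.array([[1.0], [0.0]])
print(controllable(Ae, Be), stabilizable(Ae, Be))
```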
The Kalman-Yakubovich-Popov Lemma
The following technical result will then be used for deriving a simplified but useful version of the KYP-Lemma (see [8,37] and references therein) for the given system in the event that the output matrix is the identity and the input-output interconnection matrix is zero. Lemma 6. The following properties hold: ] ⪯ 0 for some = ∈ R (3+)×(3+) and all ∈ [0, ∞), where = + Ã and = + B .
Remarks 7.
(1) Note that Lemma 6(i) does not require A to be a convergent matrix (i.e., a stability matrix in the discrete framework) while has to be a convergent matrix. Conversely, Lemma 6(ii) does not require to be a convergent matrix while is a convergent matrix. (2) If Ã and G() = ( 2 −) −1 Ã are SBR, then and are convergent matrices and the identities lead to (3) The identities (3) Constraint (26) holds and there exists a matrix = ∈ R (3+)×(3+) , which is nonnegative in all entries except for the last 2 diagonal elements, such that (42) holds for all ∈ [0, ∞).
The following result is concerned with the positive realness of a discrete nominal transfer matrix of the extended discrete nominal which guarantees that of the transfer matrix of a parametrically disturbed one under a set of structured parametrical perturbations of the dynamics, output, control, and interconnection matrices. The result is based on the equivalence between the positive realness of a transfer matrix and the associated state-space realization, namely, the Positive Real Lemma, alternatively called the Kalman-Szegő-Popov Lemma or KSP Lemma, being a discrete version of the KYP-Lemma, and of those ones with the Discrete Positive Factorization Lemma (alternatively called Youla's Factorization Lemma). Theorem 10. Assume that = + , , , and are positive, is positive, and the transfer matrix 1 () = () − (/2) is positive real for some real constant ∈ R+, where () = ( 2 − ) −1 + is strictly positive real, and that the triple ( , , ) is controllable and observable. Assume that the parameterizing matrices of (25a), (25b), (25c), and (25d) are subject to parametrical disturbances so that with the disturbance matrices being subject to à ≥ − , B ≥ − , C ≥ − , and D ≥ − . Assume, furthermore, that ( , ) ( , ) and (, ) (, ) are monomial. Then, both and are positive, while the following properties hold: (i) The transfer matrix () = ( 2 − ) −1 + is strictly positive real (then is convergent), , , , > 0, and 1 () = () − /2 is positive real if there exist matrices K ∈ R × , L ∈ R 2× , where is some arbitrary positive integer, satisfying the following set of matrix relations for some ∈ (0, 1] and the given ∈ R+, for some existing matrices which satisfy the following set of matrix identities: are positive since ( , ) ( , ) and (, ) (, ) are monomial, so that the sequences { []} and { []} are nonnegative for any nonnegative control. Note also that, from the conditions on the parameterizing matrices, both extended discrete systems describing the given hybrid system are positive. Note that is at least critically stable although not necessarily convergent. Note also that if > 0 then () is strictly positive real and, if = 0, then it is positive real if A is at least critically stable (rather than convergent) with eventual simple poles with positive semidefinite residuals on || = 1. Note that Re( ( ) + Re ( − )) − ⪰ 0 for any ∈ [0, ∞) since () − (/2) is positive real and = + . From the equivalence between the Discrete Factorization Lemma and the Discrete Positive Real Lemma [38], there exist a positive definite real matrix , which is diagonal since is positive and convergent [3], and real matrices , , and ≻ 0 such that the matrix relations (61) hold, implying from the Discrete Positive Factorization Lemma that where so that the following factorization holds: where 1 ( ) = + ( 2 − ) −1 . Thus, by invoking similar arguments on the equivalence between both lemmas, () = ( 2 − ) −1 + − (/2) is positive real for some given ∈ (0, 1] if and only if there exist a diagonal positive definite real matrix and real matrices , , and ≻ 0, subject to P ≻ − , Q ⪰ − , such that the following matrix relations hold: Now, direct calculations show that (61) guarantees (68) and, by noting that it has negative entries except the last = + diagonal entries because the entries of ̸= 0 are nonnegative since the system is positive, the state-space realizations of 1 () and () do not fulfill Lemma 6 for a which has negative off-diagonal entries. A similar conclusion follows for 1 () and ().
Remark 11.
(1) Note that in Theorem 10(ii) and can be critically stable, since (), 1 (), (), and 1 () are positive real, so that they can eventually possess simple eigenvalues, such that the four resulting matrices ( ) + ( − ), with being any of the four above ones, have positive semidefinite residuals at such simple critical poles.
(2) Note that if the nominal extended discrete system is positive and Theorem 10 holds, then the extended discrete system is positive and it is also positive in the input-output positivity (or "passivity") sense of [13,15] (see also [30]) since positive realness of transfer matrices is equivalent in the discrete-time domain to ) and ( , ) ≥ 0 on any discrete-time interval [ 0 , 1 ] with 0 ≥ 0. In particular, for any integers 0 ≥ 0 and 1 ≥ 0 with = − | P| and a close relation for the nominal with L = 0, K = 0, P = Q = 0.
(3) Usually, the positive real and positive factorization lemmas are stated for minimal (i.e., simultaneously controllable and observable) state-space realizations in order to exclude from the analysis eventual unstable and critically stable (in the nonstrict positive realness case) zero-pole cancellations in the transfer matrices [13,15]. The intuitive reason is that the state-space realization is obtained as a minimal one from the given transfer matrix, so that it does not give information about eventual cancellations removed from the transfer matrices and their implication in the state-space descriptions when dealing with the Continuous or Discrete Positive Real Lemmas or their equivalent Youla's Factorization Lemmas.
Theorem 10 states a characterization of the admissible structured perturbations for the dynamics, output, control, and interconnection matrices of a state-space realization associated with the discrete nominal positive real transfer matrix which guarantees that the perturbed system, being positive, maintains the positivity and positive realness properties of the nominal one. Based on the Discrete Positive Real Lemma, without invoking the factorization result, we now establish a parallel result applicable to nonstructured parametrical disturbances at the expense of testing the positive definiteness of an associated matrix.
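For readers who want to test positive realness of a given discrete realization numerically, the LMI form of the discrete positive-real (Kalman-Szegő-Popov) lemma can be checked with a semidefinite programming solver. The sketch below is a generic feasibility test under hypothetical state-space data, not the paper's specific hybrid parameterization.

```python
import numpy as np
import cvxpy as cp

# G(z) = C(zI - A)^{-1}B + D is positive real iff there exists P = P^T > 0 with
#   [[ P - A^T P A,        C^T - A^T P B ],
#    [ C - B^T P A,  D + D^T - B^T P B ]]  positive semidefinite.
A = np.array([[0.5]]); B = np.array([[1.0]])
C = np.array([[0.4]]); D = np.array([[1.0]])

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
M = cp.bmat([[P - A.T @ P @ A,        C.T - A.T @ P @ B],
             [C - B.T @ P @ A, D + D.T - B.T @ P @ B]])
prob = cp.Problem(cp.Minimize(0), [P >> 1e-6 * np.eye(n), M >> 0])
prob.solve()
print("positive real:", prob.status == cp.OPTIMAL)
```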
Theorem 12. Assume that the hypothesis of Theorem 10 holds for the parametrical disturbance matrices.Then, both and are positive and the following properties hold: (i) The transfer matrix () = ( 2 − ) −1 + is strictly positive real (then is convergent), , , , > 0, and 1 () = () − /2 is positive real if there exist matrices satisfying the following set of matrix relations: for some ∈ (0 , 1] and the given ∈ R + , for some existing matrices and ∈ R × , which satisfy the following set of matrix identities: Furthermore, satisfies (71): for any integers 0 ≥ 0 and 1 ≥ 0 with = − | P| and a close relation is satisfied by the nominal with L = 0, K = 0, P = Q = 0.
(iii) If ⪰ 0, Q ≻ − , then () is positive real, () is strictly positive real, and 1 () is positive real even if is critically stable.
Example 13. A particular system of the studied hybrid class with = 4 is now discussed, with some of the parameters being fixed "a priori" while others are primarily left undetermined in order to find the needed positive realness conditions. Consider the hybrid system (1a), (1b), and (1c) with = = 2, = = 1, = 0.01, = = (1, 0); which leads to the following matrices: If = 1, note that is Hurwitz and Metzler and then Φ (0.01) is positive and convergent. If ≥ 0, > 0, and > 0, then both the continuous-time subsystem and its discretized version for any sampling period are positive dynamic systems. Also, is convergent. The transfer function of the uncoupled continuous-time subsystem is where is the Laplace transform argument. It can be easily checked that the continuous transfer function is positive real if 1 = 2 = 0; 0 ≤ = 1 2 ≤ 3 and > 0. If 2 > 0 and 1 > 0 with 1 = 2 = 0, then the triple ( , , ) is controllable and observable. If, in addition, > 0, then () is strictly positive real. If, furthermore, > 0 or < 3, then () is strongly strictly positive real in the sense that Re On the other hand, note that the uncoupled continuous-time subsystem is a mathematical model for some well-known linear dynamic systems, such as a damped mechanical system or an RLC electric circuit, described by the differential equation forced by a term () calculated from a primary control () which is everywhere piecewise continuous if = 0 and everywhere twice continuously differentiable with piecewise continuous second time-derivative if > 0. Note that if ‖ ‖ 2 < 0.581976, then being Hurwitz guarantees that is convergent from the fulfillment of the stability constraint , where is the discrete transfer function argument, representing in the time domain a one-step advance operator, and, equivalently, −1 is a one-step delay operator formally equivalent to −1 . It can be directly checked that the digital transfer function is strictly positive real with Re ( ) ≥ = 1 for ∈ [0, 2]. The extended has four stable eigenvalues, namely, 0.367879, 0.135335, 0.930074, −0.430074 in the free coupling case, that is, if = 0, = 0. By using a similar reasoning to that guaranteeing that being Hurwitz implies that is convergent, one concludes that the system matrix of the is convergent if the sufficiency constraint below holds: (iii) Assume that 1 () is strictly positive real for some ∈ R+. Then, the closed-loop keeps the asymptotic hyperstability property from that of the nominal in the sense that Property (i) holds and, furthermore, {‖[]‖ 2 }, {‖ []‖ 2 }, and {‖[]‖ 2 } converge asymptotically to zero for any given initial condition [0].
Proof. One gets from (86) and (89), for any integers 0 ≥ 0 and 1 > 0, with the four-block partitioned matrices of (90a) being at least positive semidefinite (see Theorems 10 and 12). Since + P ≻ 0, the sequence {‖[]‖ 2 } is uniformly bounded for any given initial condition [0] and any nonlinear, eventually time-varying controller satisfying Popov's inequality (89). Thus, the current closed-loop system is hyperstable as is the nominal one. If ( , ) is controllable and (26) holds, then (, ) is controllable for the parameterizations defined in (25c) and then the uniform boundedness of {‖[]‖ 2 } implies that of {‖[]‖ 2 }. Property (i) has been proved. Under the conditions of Property (ii), since M1 ≻ 0, so that ≻ 0 and ≻ 0 (equivalently, ≻ 0), and M1 ⪰ − M1 , so that Q ≥ − , R ⪰ − , then M1 = M1 + M1 ≻ 0 and ≻ 0 and P ⪰ − ; then it follows from the boundedness of the second inequality of (89) for all 1 > 0 that the sampled state and input of the nominal and current converge asymptotically to zero without the need for any controllability assumption. This proves Property (ii). Under the additional conditions of Property (iii), is convergent and {[]} → 0 from the first identity of (68), which is a discrete Lyapunov matrix equation. Note that (74)-(76) hold with ≻ 0 and Q ⪰ − (from the strict positive realness condition) and ≻ 0 as well as ≻ 0 (since R ⪰ − ) and, furthermore, ≻ 0 and ≻ 0.
Otherwise, lim →∞ Re( 1 ( ) + 1 ( − )) = 0 and the nominal 1 () would not be strictly positive real, as would happen with the disturbed transfer matrix. This implies that {[]} → 0 since otherwise lim 1 →∞ ∑ 1 0 [][] would be infinity from the system positivity. A similar argument concludes that {[]} → 0, since ≻ 0, without requiring a controllability condition, since any eventual zero/pole cancellation in the transfer matrix is necessarily strictly stable (since the transfer matrix 1 () is strictly positive real), so that any eventual uncontrollable mode is asymptotically stable. This proves Property (iii). To prove Property (iv), note that if M1 ⪰ 0 with ≻ 0 and Q ≥ − , then 1 () is positive real. From the system positivity and the finite upper-boundedness of (89), lim We now introduce the concept of asymptotic hyperstability in the mean in the sense that the system is globally stable and, furthermore, the input and output power (and then its input-output instantaneous power) converge asymptotically to zero except, eventually, for a set of time instants of zero measure.
(2) The nonlinear and eventually time-varying controller ((), ) is everywhere piecewise continuous with respect to and continuous with respect to in R + × R + .
Note from (100) that, since ( ) → 0 as → ∞, since can be taken to be arbitrarily small, and simultaneously since the closed-loop is asymptotically hyperstable because the matrices , , and , , , are positive definite for all ∈ Z+, then { [ +ℓ + ] } → 0 and {[ + ℓ + ]} → 0, as follows from the direct extension of Theorem 16(iv) to time-varying parameterizations under sufficiency-type conditions of asymptotic hyperstability, provided that = and is nonsingular for all ∈ Z+ [13]. Since the state, input, and output have nonnegative components, one also gets that () → 0 and () → 0 as → ∞ except, eventually, at isolated time instants where the nonlinearity ((), ) is not continuous.
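A minimal simulation sketch of this hyperstability setting follows: the positive-real block from the earlier LMI sketch is closed through a time-varying nonnegative sector gain, a simple member of the Popov class (the accumulated product is nonnegative, so the Popov-type inequality holds trivially), and the state converges as the theory predicts. All numbers are illustrative.

```python
import numpy as np

A, B, C, D = 0.5, 1.0, 0.4, 1.0          # PR block G(z) = 0.4/(z-0.5) + 1
rng = np.random.default_rng(0)

x, popov_sum = 2.0, 0.0
for k in range(200):
    kappa = rng.uniform(0.0, 2.0)        # arbitrary nonnegative sector gain
    # u = -kappa*y with y = C*x + D*u  =>  u = -kappa*C*x / (1 + kappa*D)
    u = -kappa * C * x / (1 + kappa * D)
    y = C * x + D * u
    popov_sum += -u * y                  # w = -u; sum w*y stays >= 0 here
    x = A * x + B * u

print(f"final |x| = {abs(x):.3e}, Popov sum = {popov_sum:.3f}")
```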
If the linear part of the system is not positive real, or not even stable, or if it is desired to improve its relative stability, a linear feedback law can be injected prior to the operation of the nonlinear device towards the achievement of positive realness or strict positive realness of the transfer matrix describing the linear feed-forward block. In particular, assume that the following state-feedback linear control law is given. The following result is related to the achievement of the hyperstability of the closed-loop extended discrete hybrid system under the given control law as well as the asymptotic hyperstability in the mean of ℓ . Theorem 22. Assume that ( , ), ( , ), ( , ), ( , ), and ( , ) are controllable pairs. Then, an appropriate feasible parameterization of the control gains in the fictitious discrete control law (92)-(93), with the replacement (⋅) → (⋅) and the system closed-loop reparameterization (103), associated with the feedback control law (101), may lead to a positive real transfer matrix of the positive closed-loop system ℓ and then to its hyperstability if () = −((), ) for any nonlinear and eventually time-varying nonlinearity ((), ) which satisfies a Popov's-type inequality. If, furthermore, ℓ is asymptotically hyperstable and assumptions (2) and (3) of Theorem 21 hold, then the closed-loop hybrid ℓ is asymptotically hyperstable in the mean.
Proof. We refer with superscript bars to any matrices of either the parameterization or the Positive Real Lemma (, , , , and ) after applying the control law (101). Note that, under controllability of any pair (, ), it is possible to choose a state-feedback control gain achieving any given arbitrarily prescribed stable closed-loop pole placement. Since ( , ), ( , ), ( , ), ( , ), and ( , ), so ( + , ), are controllable pairs, it is feasible to choose the control gains and in such a way that , and then , and have stable eigenvalues being as largely dominant, related to the spectral norms of and , as possible via the choices of and , so that the matrix of dynamics of the closed-loop extended discrete hybrid system is convergent. On the other hand, one can choose the -matrix of sufficiently small nonnegative entries so that and , and then in the second constraint of the Discrete Positive Real Lemma, have a sufficiently small spectral norm related to that of , while + is of dominant norm of order (‖ ‖ 2 ) over that of , of order (‖ ‖ 2 2 ), so that ⪰ 0 and ⪰ 0. In this way, the discrete modified closed-loop transfer matrix of , related to the new input (), might be designed to be at least positive real. On the other hand, the asymptotic hyperstability in the mean of ℓ follows from Theorem 21 from the asymptotic hyperstability of ℓ and assumptions (2) and (3) of Theorem 21, since the first assumption of such a theorem holds because the controllability of the pair ( , ) implies that of the pair ( , ).
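The inner stabilizing loop used in this proof can be prototyped with standard pole placement; the sketch below (hypothetical extended discrete pair, not the paper's data) chooses a gain making the closed-loop matrix convergent before any nonlinear Popov-class device closes the outer loop.

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[1.2, 0.1],
              [0.0, 0.9]])          # not convergent: eigenvalue at 1.2
B = np.array([[1.0],
              [0.5]])

# Place the closed-loop poles strictly inside the unit circle.
K = place_poles(A, B, [0.4, 0.5]).gain_matrix
Acl = A - B @ K
print(np.abs(np.linalg.eigvals(Acl)))   # both moduli < 1
```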
Conclusions
This paper has investigated a class of hybrid systems and characterized, with explicit results, its positivity and some of its stability properties. The hybrid system consists of a dynamic system which has a continuous-time substate and a digital one with mutually coupled dynamics. An extended discrete hybrid system, which describes any hybrid system in the given class at sampling instants, is investigated to establish the stability and controllability properties of the discretized system. The state of the extended discrete hybrid system contains the discretized substate of the continuous-time subsystem at sampling instants and the digital substate. The paper studies the stability and controllability, in a robustness context under parametrical disturbances, of such an extended discrete system whose state is defined by both the digital substate and the discretized version of the continuous-time subsystem at sampling instants. Two discrete versions of the KYP-Lemma are given for (a) a simplified version of the hybrid system related to the relevant pairs of the system and control matrices and (b) a more general version of such a lemma related to the whole state-space realization involving the output and input-output interconnection matrices as well. The relationships of the positive realness of the transfer matrix to the state-space realization are explicitly characterized through the discrete KYP-Lemma and Youla's factorization lemma. The obtained results on positive realness are related to the hyperstability and asymptotic hyperstability properties of the hybrid system for any member of a class of nonlinear and perhaps time-varying controller devices satisfying a Popov's-type inequality. Finally, some extensions are given for the case where there is a supplementary stabilizing linear control scheme which stabilizes the dynamics of the hybrid system prior to the nonlinear and time-varying control law operation so as to establish the hyperstability of the closed-loop system.
1.1. Notation and Terminology. (a) R+ is the set of nonnegative real numbers; R+^n (n being a positive integer) is the Cartesian product of n copies of R+. The vector function v(t) ∈ R+^n for some t ≥ 0 if all its components are nonnegative at t. The matrix A ∈ R+^{n×m}
Theorem 5. Define à = − and B = − such that ( , ) is a nominal controllable pair and = { ∈ ≤ , ∀ ∈ R ×( +) }, where the control matrices of the nominal , parameterized by and , and the current systems are those of the parameterization (25c) of (25a). Then, the following properties hold: (i) The system is controllable if and only if (, ) is full rank. If (, ) is not full rank, then there exists a control sequence such that the system is approximately controllable with state targeting error 0[max(‖ − ‖ 2 , ‖ − ‖ 2 )]. (ii) The system is controllable if and only if rank (, ) = 2 and rank ( , ) = . (iii) The system is controllable if rank ( , ) = 2 and | 9,286.2 | 2017-12-13T00:00:00.000 | [
"Engineering",
"Mathematics"
] |
Molecular Dynamics Simulation and Analysis of Crystallization System in PVC Tunnel Drain-Pipe at Different Temperatures
Corresponding Author: Xuefu Zhang, School of Civil Engineering, Chongqing Jiaotong University, Chongqing, 400074, China. Email: <EMAIL_ADDRESS>. Abstract: Aiming at the problem that carbonate and calcium ions easily crystallize and precipitate in the drain-pipes of limestone tunnels and then adsorb onto PVC pipes in Western China, a "solid-liquid interface model" for the interaction between PVC materials and ions in aqueous solution is established using the molecular dynamics simulation software Materials Studio, and the microscopic mechanism of the crystallization system in PVC drain-pipes at different temperatures is studied. The results show that the self-diffusion coefficient of water molecules increases with increasing temperature. In the temperature range of 283-308 K, the self-diffusion coefficients of calcium ions and carbonate first decrease, then increase, and decrease again with increasing temperature. When T = 303 K, the self-diffusion coefficient of both ions reaches its maximum value, which makes crystallization easier. With the increase of temperature, the binding energy between the ionic solution system and PVC first increases and then decreases. The research results and methods have important guiding significance for the prevention and control of crystal blockage of limestone tunnel drainage pipes.
Introduction
With the development of traffic planning in the central and western regions of China, the importance of tunnel construction has received more and more attention. There are many limestone tunnels in Southwest China. The water in this limestone geological environment contains nearly saturated bicarbonate and calcium ions. The scaling ions flow into the drainage pipeline along with the groundwater, form calcium carbonate crystals in the tunnel drainage system, and deposit in the drain-pipe (Chen et al., 2019; Wu et al., 2019; Al Nasser and Al Salhi, 2015). This easily causes blockage of the tunnel drainage system, leading to rupture and leakage of the tunnel lining, argillization of weak interlayers of the surrounding rock structural planes due to water erosion, and cracking of the drainage pipeline and lining under freeze-thaw cycles in alpine regions.
Most of the research on tunnel drainage is mainly reflected in discovery and treatment (Lee et al., 2012; Jung et al., 2013), and research on pipeline crystallization scaling is scarcer. As early as the end of the 1950s, Kern put forward the mass balance model, believing that scaling can be divided into deposition and removal, the difference between them being the net amount of scaling (Kern, 1959). Then, Taborek et al. (1972) established a prediction model based on the mass balance model and considered the surface reaction to be the main factor controlling scaling deposition. Hasson et al. (1968) studied the mechanism of CaCO3 scale deposition on the surface of heat exchangers by water turbulence containing dissolved scale components under non-boiling conditions and proposed a CaCO3 scale deposition rate model. Since the 21st century, research on the crystallization of calcium carbonate in aqueous solution and pipeline scaling has entered the stage of numerical model research through geochemical calculation software such as PHREEQC. For example, Tao et al. (2007) used PHREEQC to simulate and analyze the uranium leaching agent of groundwater in a high-salinity area in China and determined the critical value related to the hydrogeochemical conditions of crystallization precipitation through the basic conditions of crystallization, such as the saturation index. Zhu analyzed the effects of pH, Ca2+, and bicarbonate on the dissolution and precipitation of calcium carbonate. Chen studied the influence of the activity product, solubility product, and supersaturation index on the crystallization process through related crystallization experiments and PHREEQC. Some scholars have also used molecular dynamics to analyze the adsorption behavior of molecular ions. Hu studied, respectively, the adsorption behavior of proteins on the surfaces of PEG and PDMS polymer antifouling films. Huang et al. (2005) simulated and analyzed the interaction between polyvinylidene fluoride (PVDF), polytrifluorochloroethylene (PCTFE), fluororesin (F2314), and fluororubber (F2311) and 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) and obtained the binding energies between the four fluoropolymers and TATB. Tian (2008) discussed the scaling process on the Fe (001) crystal face, and Kong and Wang (2016) analyzed the adsorption behavior of Cu(II) on the hydroxylated kaolin (001) surface in an aqueous environment by molecular dynamics.
As the above shows, microscopic study of how the tunnel drain-pipe wall interacts with scaling ions crystallizing from aqueous solution on the pipe wall surface is still lacking, especially regarding the interaction between PVC materials and ions in aqueous solution. In general, such studies require a large number of systematic experiments, which demand complete equipment and are time-consuming and laborious. Moreover, they cannot probe the relevant properties of the interface between the two phases from a microscopic point of view (Kirkham et al., 2020; Chihi et al., 2020). Therefore, the molecular dynamics method is used in this study to investigate the adsorption properties of PVC materials towards ions in aqueous solution, and a "solid-liquid interface model" is used to explain or predict the formation of the crystallization system in PVC drainage pipes. Meanwhile, the micro-mechanism of crystallization in limestone tunnel drain-pipes is studied. The results and methods have important guiding significance for the prevention and control of crystallization blockage of limestone tunnel drain-pipes.
Construction of Interface Model between PVC Layer and Solution
The process map of the construction and method of the numerical model is shown in Fig. 1. For the construction of the PVC layer polymer, the larger the polymerization degree of the molecular chain, the closer the model is to the real one and the more accurate the simulation results are; however, this requires a higher computer configuration, and the actual simulation process takes longer (Emery et al., 2020; Lewington et al., 2020; Schmidt et al., 2020). Therefore, an appropriate degree of polymerization is needed in the actual model calculation. In this study, considering the accuracy and time of calculation, a polymerization degree of 10 is selected. The PVC chain is established with the Materials Visualizer module in the Materials Studio software. The molecular formula is shown in Fig. 2a and the single-chain model is shown in Fig. 2b. Through the Modules -> Amorphous Cell -> Construction module, the amorphous three-dimensional structure of 10 chains is established. The size of the three-dimensional cell structure is 19.5172 × 19.5172 × 19.5172 Å³, and the cell structure is shown in Fig. 3a. Similarly, the Visualizer and Amorphous Cell modules in the Materials Studio software are used to construct the calcium carbonate aqueous solution. The calcium carbonate aqueous solution system constructed in this study contains 300 water molecules, 3 carbonate ions, and 3 calcium ions. The volume of the system is 19.5172 × 19.5172 × 24.8605 Å³. To compare the interactions of the calcium ion and the carbonate ion with the polyvinyl chloride layer, the two ion species are placed at the bottom of the solution near the PVC. After the construction is completed, the model is also optimized by molecular mechanics, and the final interface model of the aqueous solution and the PVC layer is shown in Fig. 3b.
Determination of Simulation Parameters
The Discover module, suitable for molecular dynamics (MD) simulation, in the Materials Studio software is used. Using the high-precision universal COMPASS force field, all atom coordinates of the PVC layer are fixed. Since the pressure of the system is not a key factor, the canonical ensemble (NVT) is adopted. Firstly, the NVT ensemble and the velocity-scaling temperature control method are used to simulate the dynamics of the system for 100 ps to bring the system to the equilibrium state. Secondly, the kinetic simulation is carried out in the NVT ensemble with an Andersen constant-temperature heat bath. The time step of the process is 1 fs and the simulation time is 200 ps. The trajectory of the system is recorded once every 10,000 steps and the simulated temperature range is 283-308 K. The simulation is conducted every 5 K and the cutoff radius is 9.5 Å.
Determination of System Equilibrium
The system equilibrium is judged by temperature and energy balance. The accuracy of the simulation is characterized by observing the energy convergence parameter, the ratio of the total energy fluctuation to the kinetic energy fluctuation (R). When the energy convergence parameter is ≤0.001 and R ≤ 0.001, the calculation results are reliable. The equilibrium process of the system at 298 K is illustrated as an example. Figure 4a shows the energy output curve of the equilibrium process and Fig. 4b shows the temperature output curve of the equilibrium process. It can be seen from the energy curve that the change of the potential energy and non-bond energy of the system with time has flattened, indicating that the energy of the system has reached the equilibrium state. It can be seen from the temperature curve that the temperature fluctuates within 10% above and below 298 K, indicating that the system temperature has reached the equilibrium state; the determination of equilibrium at other temperatures follows the same method.
Analysis of Self-Diffusion Coefficient
To further study the dynamic characteristics of the system, the self-diffusion coefficients of the system at different temperatures are analyzed and calculated. The self-diffusion coefficient D is the physical quantity measuring the motion of a single colloidal particle surrounded by the same uniformly distributed particles (Tian, 2008). There are many methods to calculate the self-diffusion coefficient. In this study, the self-diffusion coefficient of a particle is calculated from its mean square displacement (MSD). Based on the MSD expression and Einstein's diffusion law, D = (1/6)·lim_{t→∞} d⟨|r(t) − r(0)|²⟩/dt, the value of the diffusion coefficient D can be obtained. Figure 5 shows the mean square displacement curves of water molecules at different temperatures obtained by molecular dynamics simulation at 283-308 K. Below 10 ps there is a transition period from an unstable state to a stable state, and the mean square displacement curve after 10 ps is approximately a straight line. It can be found that with the increase of temperature, the slope of the mean square displacement curve increases, and thus the self-diffusion coefficient of water molecules also increases. This is because the kinetic energy of water molecules increases with increasing temperature and the hydrogen bonds between some water molecules are broken, resulting in an increase in the number of free single water molecules. Thus the self-diffusion coefficient of water molecules increases, that is, the activity of water molecules is enhanced. At the same time, with the increase of the activity of water molecules, the contact probability of calcium ions and carbonate ions is reduced. Therefore, the increase of temperature inhibits crystal growth to a certain extent. In the same way, the self-diffusion coefficients of calcium ion and carbonate ion can be obtained, as shown in Fig. 6. In the temperature range of 283-308 K, the self-diffusion coefficients of calcium ion and carbonate first decrease, then increase, and decrease again. The self-diffusion coefficients of calcium ion and carbonate ion change obviously at different temperatures, which indicates that temperature has a great influence on them. When T = 303 K, the self-diffusion coefficients of calcium ion and carbonate ion reach their maximum values. At this time, the activity of the two ions is high and they easily react to form ion pairs and crystals under certain conditions. Moreover, this temperature is an inflection point: above it, the diffusion coefficient tends to decrease. The simulation results for the self-diffusion coefficient in the temperature range of 283-308 K are similar to those obtained in other references (Wang, 2010; Wang et al., 2010), which verifies the correctness of the results.
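In practice the Einstein-relation estimate above reduces to a linear fit of the MSD after the ~10 ps transient; the sketch below illustrates this with a synthetic MSD trace (the numbers are not the paper's data).

```python
import numpy as np

dt_ps = 0.01                      # trajectory output interval, ps
t = np.arange(0, 200, dt_ps)      # 200 ps run, as in the simulations above

# Hypothetical MSD curve in Angstrom^2 (in practice: averaged over molecules)
msd = 6 * 0.25 * t + np.random.default_rng(1).normal(0, 0.5, t.size)

fit_mask = t > 10.0               # discard the sub-10 ps transient
slope = np.polyfit(t[fit_mask], msd[fit_mask], 1)[0]   # Angstrom^2 / ps
D = slope / 6.0                                        # Einstein relation
print(f"D = {D:.3f} A^2/ps = {D * 1e-8:.2e} m^2/s")    # 1 A^2/ps = 1e-8 m^2/s
```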
Analysis of Interaction between Systems
The interaction between PVC and the scaling-ion solution system is analyzed by simulation. If the interaction is very strong, the scaling-ion solution system easily adheres to the PVC layer and the pipe wall easily crystallizes. To calculate the adsorption energy of the PVC layer, the interaction energy of the system is defined as ΔE = E_total − (E_p + E_o) (1), and the binding energy E_binding = −ΔE is the opposite of the interaction energy ΔE. In Eq. (1), E_total is the total energy of the system, E_p is the single-point energy of the PVC layer, and E_o is the single-point energy of the calcium ions, carbonate ions, and water molecules after interaction (Tian, 2008). The calculated energy values of the system at different temperatures are shown in Table 1. The trend of the binding energy with temperature is shown in Fig. 7.
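Eq. (1) amounts to simple bookkeeping over three single-point energies; the short sketch below uses purely hypothetical values to show the sign convention.

```python
# Hypothetical single-point energies (e.g., kcal/mol), not the paper's data.
E_total = -3405.2   # total energy of the interface model
E_p = -1210.7       # PVC layer alone
E_o = -2102.4       # ions + water alone

dE = E_total - (E_p + E_o)   # interaction energy, negative for adsorption
E_binding = -dE              # binding energy, positive for a stable system
print(f"interaction energy = {dE:.1f}, binding energy = {E_binding:.1f}")
```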
It can be seen from Table 1 that the interaction energy is negative, which means that the adsorption of the scaling-ion solution system on the PVC surface is a spontaneous process and a relatively stable system can be formed. Compared with (Tian, 2008), the adsorption energy of PVC for the calcium ion and carbonate ion system is far less than that of the calcite crystal surface. Figure 7 is a graph of the binding energy between the ionic solution system and PVC changing with temperature. Combining Table 1 and Fig. 7, it can be seen that, within the range of simulated temperatures, the binding energy between the scaling-ion solution system and PVC first increases and then decreases with increasing temperature. When the temperature T = 283 K, the binding energy is at its minimum and Ca2+ and CO3 2- are the most difficult to deposit and crystallize on the surface of the PVC pipe wall. When the temperature T = 298 K, the binding energy between the ionic solution system and PVC reaches its maximum and Ca2+ and CO3 2- deposit easily on the surface of the PVC pipe wall.
Conclusion
A two-layer model system of a PVC layer and an aqueous solution containing calcium ions and carbonate ions was established, and molecular dynamics simulations of the system at different temperatures were carried out. The interaction energy, self-diffusion coefficients, and interaction relationship between the PVC layer and the aqueous solution system were analyzed. It can be concluded that: (1) With the increase of temperature, the self-diffusion coefficient of water molecules increases. The increased activity of water molecules reduces the contact probability of calcium ions and carbonate ions. Therefore, the increase of temperature inhibits crystal growth to a certain extent. (2) In the temperature range of 283-308 K, the self-diffusion coefficients of calcium ion and carbonate first decrease, then increase, and decrease again. When T = 303 K, the self-diffusion coefficients of calcium ion and carbonate ion reach their maximum values. At this time, the activity of the two ions is relatively high and they easily react to form ion pairs and crystals under certain conditions. (3) The adsorption energy of PVC for the calcium ion and carbonate ion solution system is much less than that of the calcite crystal surface. In a certain temperature range, the binding energy between the ionic solution system and PVC first increases and then decreases with increasing temperature. When the temperature T = 283 K, the binding energy is at its minimum and Ca2+ and CO3 2- are the most difficult to deposit and crystallize on the surface of the PVC pipe wall. When the temperature T = 298 K, the binding energy between the ionic solution system and PVC reaches its maximum and Ca2+ and CO3 2- deposit easily on the surface of the PVC pipe wall.
Due to the consideration of the accuracy and calculation time of the model results in this study, the length of the degree of polymerization and the molecular concentration of calcium carbonate solution are limited. In the future, we will conduct further in-depth research on this problem, so that molecular dynamics simulation in this field can more truly approximate the field situation. | 3,612 | 2021-01-01T00:00:00.000 | [
"Materials Science"
] |
Deep Learning-Based Technique for Remote Sensing Image Enhancement Using Multiscale Feature Fusion
The present study proposes a novel deep-learning model for remote sensing image enhancement. It maintains image details while enhancing brightness in the feature extraction module. An improved hierarchical model named Global Spatial Attention Network (GSA-Net), based on U-Net for image enhancement, is proposed to improve the model’s performance. To circumvent the issue of insufficient sample data, gamma correction is applied to create low-light images, which are then used as training examples. A loss function is constructed using the Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR) indices. The GSA-Net network and loss function are utilized to restore images obtained via low-light remote sensing. This proposed method was tested on the Northwestern Polytechnical University Very-High-Resolution 10 (NWPU VHR-10) dataset, and its overall superiority was demonstrated in comparison with other state-of-the-art algorithms using various objective assessment indicators, such as PSNR, SSIM, and Learned Perceptual Image Patch Similarity (LPIPS). Furthermore, in high-level visual tasks such as object detection, this novel method provides better remote sensing images with distinct details and higher contrast than the competing methods.
Introduction
The military, earth sciences, agriculture, and astronomy industries are experiencing a surge in demand for high-quality remote sensing images. Nonetheless, less-than-ideal environmental conditions reduce brightness and obscure critical elements in remote sensing images. Since brightness is a major quality component of remote sensing images, low-light enhancement techniques are required for improved information representation and visual perception [1].
Image-enhancement techniques fall into two categories: spatial-domain and transform-domain methods. Traditional histogram equalization [2] is the most popular spatial-domain algorithm owing to its simplicity and efficiency. However, its primary disadvantage is that if the histogram contains peaks, the output is over-enhanced, resulting in saturation issues and an overly sharpened image. To overcome this issue, spatial-domain histogram-based methods such as dynamic histogram equalization [3] and a histogram modification framework [4] have been proposed. Although both can prevent over-enhancement, details are not emphasized because these methods preserve the input histogram. Recently, adaptive gamma correction with weighted distribution (AGCWD) [5] has been proposed as a new contrast enhancement technique; it produces similar results and may also cause a loss of detail in bright areas and an increase in saturation. Previous studies proposed a two-dimensional (2D) histogram that uses contextual information to improve the contrast of the input image [6][7][8]. However, generating a 2D histogram has a high computational cost and is not suitable for many practical applications.
The main contributions of this study are as follows:
• Depthwise separable convolution is a lightweight convolution operation that significantly reduces the number of parameters and computations. Herein, we propose replacing the ordinary convolution in GSA-Net with depthwise separable convolution, reducing the number of parameters from 29.86 M to 7.06 M (a reduction of about 76%).
• A global attention module is introduced to weaken the noise response and integrate local information. Specifically, the global attention mechanism replaces the convolution layers of U-Net and is embedded into the network backbone.
• We propose an improved loss function that combines the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as a quotient to avoid deviation in the model's optimization direction and gradient diffusion. This loss function guides network training and improves the convergence of the model.
• The proposed model is evaluated on a synthesized low-light image enhancement dataset, and the results demonstrate state-of-the-art performance in image enhancement. Moreover, we perform object detection on the enhanced images, which has positive implications for remote sensing images.
Related Studies
Data Augmentation
With the widespread application of deep learning in computer vision, the diversity of datasets is crucial for the performance of algorithms. To enhance the dataset used in our study, we employ gamma correction as an effective data augmentation technique. Gamma correction possesses unique advantages in adjusting the brightness and contrast of images, augmenting the dataset for improved training and evaluation of our model.
Gamma transformation is a non-linear process that enhances or suppresses different intensity regions of an image, especially under low-light conditions, to improve the visibility of details. When determining the parameter values for gamma transformation, we consider the specific requirements for simulating low-light synthetic images. Small gamma values (<1) enhance the details in darker regions, making them distinct, which is critical for simulating images in low-light conditions. In addition, we ensure that the gamma value range includes one, to preserve the original brightness of the image. Larger gamma values (>1) are applied to suppress the brighter areas of the image, preventing excessive amplification and achieving visual balance. This tailored selection of gamma transformation parameters caters to the specific demands of simulating low-light synthetic images, enhancing image quality and adaptability in various scenarios. The luminance of the transformed image decreases as the gamma value increases, and changing the gamma value affects the quality of the remote sensing images; a gamma value of four is used in this study to produce a darker image. The gamma transformation takes the form V' = αV^γ, so when γ > 1 it darkens an image, which facilitates the generation of additional data for a specific dataset. The transformed V-channel image is merged back into the V channel of the original image, and the modified image is converted back to RGB space to produce a darkened remote sensing image. The NWPU VHR-10 dataset comprises 650 remote sensing images captured under normal lighting conditions with high contrast and categorized into ten classes; Figure 1 illustrates the dataset. To enhance dataset diversity and simulate challenging scenarios, seven sets of parameters are chosen randomly, generating seven sets of remotely sensed images under weak illumination and yielding 4550 synthetic images in total. Of these, 700 synthetic images derived from 100 normal-light images form the test set, while the remainder provide the training data. In this study, we present examples where α = 1 and the γ values are 1.5 and 4; Figure 2 illustrates an example of a synthesized low-light image.
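A minimal sketch of the low-light synthesis pipeline described above, assuming standard OpenCV conventions (HSV conversion with V in [0, 255]); the function name is ours, and the parameter ranges follow the α and γ intervals given later in the dataset section.

```python
import cv2
import numpy as np

def synthesize_low_light(rgb, alpha=1.0, gamma=4.0):
    """Darken an RGB image by applying V' = alpha * V**gamma to the HSV V channel."""
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV).astype(np.float32)
    v = hsv[..., 2] / 255.0                       # normalise V to [0, 1]
    v_dark = np.clip(alpha * np.power(v, gamma), 0.0, 1.0)
    hsv[..., 2] = v_dark * 255.0                  # write the darkened channel back
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)

# One normal-light image yields several training samples, e.g. seven
# (alpha, gamma) pairs drawn from alpha in (0.8, 1) and gamma in (1.3, 5).
```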
U-Net
U-Net is a convolutional neural network proposed for biomedical image segmentation tasks [19][20][21][22]. The term "U-Net" is derived from the network's structure, which resembles the letter "U". It uses a symmetric encoder-decoder structure and skip connections [23] in the decoder part to merge the feature information extracted from the encoder with that extracted from the decoder, thereby enhancing the reconstructed input image. The encoder component of U-Net comprises convolutional layers, pooling layers, and activation functions, which are used to extract features from the input image. The decoder's reconstruction layers, skip connections, transformative layers, and activation functions are used to reconstruct the output image.
In the field of image enhancement, U-Net is frequently employed to convert low-quality input images into high-quality output images. Specifically, the input image serves as the network's input, whereas the output image is the reconstructed image produced by the network. The U-Net network autonomously learns to convert low-quality images into high-quality ones using the training dataset. Furthermore, U-Net enhances remote sensing image characteristics such as contrast, sharpness, and details. Training the U-Net network yields an effective model for enhancing the quality of remote sensing images and supporting their practical applications.
Technological improvements in remote sensing image enhancement are vital, and their real-world significance becomes apparent when they are applied to target detection scenarios. Incorporating the U-Net architecture into the target detection framework yielded a 15% increase in detection accuracy on a dataset of aerial surveillance images. This example illustrates the efficiency of our proposed enhancements in refining the identification and localization of specific targets amidst complex and cluttered scenes, with potential implications for security and environmental monitoring.
Proposed Method
Herein, we introduce the model's primary architecture, followed by an analysis of the function of each module and the derivation of the loss function.
GSA-Net
The network consists mainly of a multiresolution feature extractor, an image texture reconstruction layer, and a feature fusion module. The downsampling [24] and channel attention modules extract spatial details and semantic information and are the components of the multiresolution feature extractor. Multiple Global Spatial Attention (GSA) modules make up the image texture reconstruction layer, which directs the network to recover the details of the image texture. The feature fusion module [19] operates at the network's end to aggregate features from various levels in a multidirectional manner and bridge the semantic gap caused by different stages and scales.
The size of the input data for GSA-Net is not rigidly constrained and is typically determined by the characteristics of the task and dataset. It is commonly set as H × W, where H represents the image height and W the width.
Firstly, in the encoder (downsampling path) of GSA-Net, we employ 3 × 3 depthwise separable convolutions to downsample the low-light remote sensing images from the original resolution. Each layer is equipped with GSA to extract comprehensive and rich semantic information; further details about the GSA block are given in Section 3.2. Additionally, for the main feature path at the end of the GSA, we use the Pixel (Un)Shuffle method as a downsampling module. These feature maps are then concatenated with the shallow feature maps obtained from the previous downsampling, and the regular U-Net process continues. Each downsampling module outputs feature maps of size (H/2^N) × (W/2^N), where N is the number of downsampling modules. The middle layers, serving as connecting components between the encoder and the decoder, do not induce significant size changes. In the decoder (upsampling path), each upsampling module doubles the spatial size of the feature maps through upsampling and convolution operations, so that after N upsampling modules the original H × W resolution is restored. Finally, the SKFF (Selective Kernel Feature Fusion) method is employed to consolidate information in the decoder (reconstruction process). Figure 3 illustrates the structure of GSA-Net.
GSA Block
As shown in Figure 4, the top-level subnetwork employs GSA blocks to capture global information, comprising two depthwise separable convolution (DSC) + PReLU layers, AdaptiveAvgPool2d, AdaptiveMaxPool2d, interpolation (the Resize block in Figure 3), and a Spatial Attention (SPA) module. Specifically, given an input feature map X of size H × W × C, AdaptiveAvgPool2d and AdaptiveMaxPool2d are used to extract representative information, resulting in an output feature map with dimensions H1 × W1 × C. The map carrying global information is then upscaled using an interpolation function, followed by Conv + PReLU processing to reduce the channel number, resulting in a global feature map of size H × W × C1. Subsequently, we apply the SPA block to enhance the attention paid to different regions of the global feature map. The block applies both max-pooling and average-pooling along the channel dimension, and the two resulting feature maps are subtracted to generate a feature descriptor that highlights the informative regions. Finally, the input feature map (encoding local information) and the optimized global feature map (encoding global information) are combined using the DSC + PReLU function, resulting in an output feature map of size H × W × C.
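The description above maps naturally onto PyTorch primitives. The following is a simplified sketch under stated assumptions: the pooled size H1 × W1 is fixed at 16 × 16 and the reduced channel count C1 is kept equal to C (the paper's exact choices are not given); how the average-pooled and max-pooled global maps are merged is also an assumption (they are summed here); the spatial attention subtracts the channel-wise max and mean maps, as the text describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSConv(nn.Module):
    """Depthwise separable convolution: per-channel 3x3 conv + 1x1 pointwise conv."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)
    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class GSABlock(nn.Module):
    def __init__(self, channels, pooled=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(pooled)   # H1 x W1 summary of the map
        self.max_pool = nn.AdaptiveMaxPool2d(pooled)
        self.reduce = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.PReLU())
        self.spa = nn.Conv2d(1, 1, 7, padding=3)       # spatial attention over the descriptor
        self.fuse = nn.Sequential(DSConv(2 * channels, channels), nn.PReLU())
    def forward(self, x):
        h, w = x.shape[-2:]
        g = self.avg_pool(x) + self.max_pool(x)        # global context (merge is assumed)
        g = F.interpolate(g, size=(h, w), mode="bilinear", align_corners=False)
        g = self.reduce(g)
        # channel-wise max/mean maps, subtracted to highlight informative regions
        desc = x.amax(dim=1, keepdim=True) - x.mean(dim=1, keepdim=True)
        g = g * torch.sigmoid(self.spa(desc))
        return self.fuse(torch.cat([x, g], dim=1))     # combine local and global info
```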
GSA-Net is distinguished from other methods by its unique advantages, which are manifested in adopting depthwise separable convolution for a lightweight design and integrating a global attention module. These features reduce the parameter count and enhance the fusion of local and global image information. However, the network still faces challenges such as computational complexity on large-scale image data, reliance on specific datasets, and sensitivity to hyperparameter choices. To address these limitations, the present study improved the loss function by incorporating an optimization approach based on a combination of the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM). This targeted optimization enhances the model's convergence, thereby mitigating the constraints and ensuring robust and reliable performance of GSA-Net in practical applications.
DSC
DSC is a lightweight convolution operation [25] that splits the traditional operation into depthwise and pointwise convolutions, significantly reducing the number of parameters and computations. The key distinction between depthwise convolution [26] and pointwise convolution [27] lies in their operational methodologies: depthwise convolution operates independently on each channel, whereas pointwise convolution performs linear combinations across channels. Depthwise Separable Convolution (DSC) combines these operations by first employing depthwise convolution, followed by pointwise convolution for inter-channel mixing. Herein, we incorporated DSC into the GSA-Net network to lighten the model.
In standard convolution, the number of parameters is determined by the size of the convolutional kernel and the numbers of input and output channels. In contrast, during depthwise convolution, DSC attends solely to each input channel, resulting in smaller convolutional kernel sizes, and pointwise convolution subsequently combines channels linearly through element-wise operations. This design effectively reduces the number of parameters per channel; replacing the standard convolution operation with DSC yields a 76% reduction in model parameters. This enhances the model's computational efficiency and reduces its storage space requirements, rendering it suitable for scenarios with limited resources.
For optimal performance and precision of DSC, selecting the appropriate kernel sizes and numbers for depthwise and pointwise convolutions is essential. In addition, enhanced DSC operations, such as group and deformable pointwise convolutions, can improve the model's accuracy. Furthermore, DSC reduces the number of parameters and computations while maintaining model accuracy. The depthwise separable convolution employed for model lightweighting is depicted in Figure 5.
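To make the parameter saving concrete: a standard k × k convolution has k²·C_in·C_out weights, whereas DSC has k²·C_in (depthwise) plus C_in·C_out (pointwise). The following check uses an illustrative layer size, not one taken from the paper; the per-layer saving varies with channel count, and the 76% figure reported above is the total over the whole network.

```python
def conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def dsc_params(k, c_in, c_out):
    """Weights of a depthwise separable convolution (biases ignored)."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 128, 128        # illustrative layer size
std, dsc = conv_params(k, c_in, c_out), dsc_params(k, c_in, c_out)
print(std, dsc, f"{1 - dsc / std:.1%} fewer parameters")
# -> 147456 17536 '88.1% fewer parameters' for this particular layer.
```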
SKFF Module
The SKFF module dynamically adjusts the receptive field via two operations, Fuse and Select, as illustrated in Figure 6. Fuse combines the information from multiresolution streams to generate global feature descriptors, while Select uses these descriptors to recalibrate and aggregate the feature maps. Specifically, the three branch streams in this study are as follows:
Fuse: The theoretical foundation behind the selection of three parallel convolution streams is rooted in the demand for multiscale information fusion to enhance the model's perception of features at different scales. The design introduces convolution streams with distinct receptive fields to capture multiscale contextual information from the input data. Each parallel convolution stream is dedicated to extracting features of a specific scale, ensuring the model comprehensively understands the multiscale characteristics of the data. The module receives input from the three parallel convolution streams and combines the multiscale features through element-wise summation. A channel descriptor s ∈ R^(1×1×C) is computed by applying global average pooling (GAP) to the combined feature map L ∈ R^(H×W×C). Next, a compact feature representation z ∈ R^(1×1×r) is generated using a channel-downscaling Conv layer, where r = C/8. Finally, the feature vector z passes through three parallel channel-upscaling layers, providing three feature descriptors v1, v2, v3 ∈ R^(1×1×C).
Select: The SoftMax function is applied to v1, v2, and v3 to generate attention activations S1, S2, and S3, which are used to recalibrate the multiscale feature maps L1, L2, and L3; the recalibrated maps are then aggregated. The SKFF uses six times fewer parameters than aggregating features through concatenation while still producing better results. The specific structure of the SKFF module is shown in Figure 6.
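A compact sketch of Fuse and Select as described above (element-wise sum of three streams, GAP, channel reduction with r = C/8, three parallel up-projections, and a softmax across streams); the layer names are ours, not the paper's, and the floor of 4 on the reduced width is an assumption for very small channel counts.

```python
import torch
import torch.nn as nn

class SKFF(nn.Module):
    """Selective Kernel Feature Fusion over three multiscale streams."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        r = max(channels // reduction, 4)
        self.gap = nn.AdaptiveAvgPool2d(1)                  # s in R^{1x1xC}
        self.down = nn.Sequential(nn.Conv2d(channels, r, 1), nn.PReLU())
        self.up = nn.ModuleList(nn.Conv2d(r, channels, 1) for _ in range(3))
    def forward(self, l1, l2, l3):
        fused = l1 + l2 + l3                                # Fuse: element-wise sum
        z = self.down(self.gap(fused))                      # compact descriptor z in R^{1x1xr}
        v = torch.stack([up(z) for up in self.up], dim=0)   # descriptors v1..v3
        s = torch.softmax(v, dim=0)                         # Select: attention across streams
        return s[0] * l1 + s[1] * l2 + s[2] * l3            # recalibrate and aggregate
```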
Loss Function
The goal of training the GSA-Net model is to infer the mapping between a low-light image X and a normal-light image Y, such that the low-light image can be enhanced to resemble the normal-light image. Currently, the mean squared error (MSE) loss function [13] and the mean absolute error (MAE) loss function [28] are the predominant loss functions for measuring the error between corresponding pixels in computer vision. MSE is susceptible to outliers, resulting in over-constraint [29], whereas MAE lacks gradient constraints [30], resulting in weak model convergence. Some studies have proposed a structural similarity index (SSIM) loss function based on human visual perception [31], which optimizes the model in the direction of visual perception; however, its illuminance and color restoration results are unsatisfactory.
The central concept underlying SSIM is the incorporation of subjective human perception. SSIM evaluates three variables: (1) distortion is less apparent in very bright regions (luminance); (2) it is less apparent in areas with complex textures (contrast); (3) adjacent pixels form a spatial structure to which the human eye is highly sensitive (structure). SSIM assesses these three variables using the following formulas:

$$l(x,y)=\frac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1},\qquad c(x,y)=\frac{2\sigma_x\sigma_y+C_2}{\sigma_x^2+\sigma_y^2+C_2},\qquad s(x,y)=\frac{\sigma_{xy}+C_3}{\sigma_x\sigma_y+C_3}$$

In the above formulas, µ represents the mean, σ represents the standard deviation, and C1, C2, and C3 are constants used to fine-tune SSIM. Moreover, C1, C2, and C3 satisfy the following relations:

$$C_1=(K_1L)^2,\qquad C_2=(K_2L)^2,\qquad C_3=C_2/2 \tag{7}$$

In Equation (7), L represents the dynamic range of the pixels; for an 8-bit grayscale image, L = 256. K1 and K2 are less than 1 and are typically set to 0.01 and 0.03, respectively. Therefore, SSIM is calculated as follows:

$$\mathrm{SSIM}(x,y)=[l(x,y)]^{\alpha}\,[c(x,y)]^{\beta}\,[s(x,y)]^{\gamma} \tag{8}$$

The three factors in Equation (8) have similar effects on subjective perception, and therefore α = β = γ = 1, which yields

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)}$$

PSNR is based on a direct comparison of the differences between pixels. The first step is to calculate the MSE over all pixels of the two images:

$$\mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big[x(i,j)-y(i,j)\big]^2$$

Taking the logarithm of the result yields the PSNR:

$$\mathrm{PSNR}=10\log_{10}\frac{\mathrm{MAX}^2}{\mathrm{MSE}}$$

where MAX is the maximum possible pixel value. In order to constrain the training process, accelerate the convergence of the model, and improve the visual quality of the enhanced image, we take into account the characteristics of the aforementioned loss functions. The weighted part of the loss function proposed by Fan et al. (2022) [32] is removed, and a loss consisting of the quotient of PSNR and SSIM is used to predict the error between low-light and normal images. In that loss, X and Y represent the samples, and ω is a constant usually set to 0.005. This formulation avoids the small value of PSNR at the start of training leading to gradient vanishing or explosion, while not introducing additional parameters. The combination of PSNR and SSIM as a loss function exhibits satisfactory general performance and can be extended to similar fields, such as image restoration and denoising.
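Because the displayed loss equation did not survive extraction, its exact form is unknown; the sketch below is one plausible reading, a quotient-style loss that decreases as both SSIM and PSNR grow, with ω = 0.005 stabilizing early training when PSNR is small. It assumes the pytorch-msssim package for a differentiable SSIM; the combination itself is our assumption, not the paper's verbatim formula.

```python
import torch
from pytorch_msssim import ssim  # differentiable SSIM for training


def psnr_ssim_loss(pred, target, omega=0.005, data_range=1.0):
    """Hypothetical PSNR/SSIM quotient loss (the paper's exact equation was lost).

    Lower is better: the numerator shrinks as SSIM approaches 1, and the
    denominator grows with PSNR; omega keeps the gradient finite when the
    initial PSNR is small.
    """
    mse = torch.mean((pred - target) ** 2)
    psnr = 10.0 * torch.log10(data_range ** 2 / (mse + 1e-12))
    s = ssim(pred, target, data_range=data_range)
    return (1.0 - s) / (omega * psnr + 1.0)
```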
Experimental Design
To validate the feasibility of the proposed algorithm, we employ the NWPU VHR-10 dataset (Huang et al.) [5] in the experimental environment shown in Table 1. The Adam optimizer is adopted, with an initial learning rate of 0.0002 and a learning rate decay of 0.00001 after each epoch. The training process is terminated when the learning rate decreases to 0.00001, and the total number of iterations is set to 200. A comparative analysis with state-of-the-art algorithms from recent years was also conducted.
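The schedule above can be expressed directly in PyTorch. The sketch below follows a literal reading of "decay of 0.00001 after each epoch" as a subtractive step with a floor at 1e-5; the stand-in model is a placeholder for the GSA-Net instance, which is not reproduced here.

```python
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the GSA-Net instance
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    # subtract 1e-5 per epoch, expressed as a multiplicative factor of the base lr
    lr_lambda=lambda epoch: max(2e-4 - 1e-5 * epoch, 1e-5) / 2e-4,
)

for epoch in range(200):
    # ... one pass over the training set, optimizer.step() per batch ...
    scheduler.step()
    if optimizer.param_groups[0]["lr"] <= 1e-5:
        break  # training terminates once the learning rate reaches 1e-5
```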
Dataset
NWPU VHR-10 is a geospatial remote sensing dataset for object detection consisting of 650 images containing objects and 150 background images, for a total of 800 images. The dataset comprises ten object categories: airplanes, ships, oil tanks, baseball fields, tennis courts, basketball courts, athletic fields, harbors, bridges, and cars. Given the small size of the NWPU VHR-10 dataset, the images are converted to the Hue-Saturation-Value (HSV) color space to mitigate overfitting during training and enhance the model's generalization ability. The V-channel image is gamma-transformed to produce a composite low-light channel v_dark = αV^γ, where α ∈ (0.8, 1) and γ ∈ (1.3, 5). The V channel of the image is then replaced with v_dark, while the other two channels remain unchanged. The image is subsequently converted back into the Red-Green-Blue (RGB) color space to generate the composite low-light image. For each normal-light image, seven sets of parameters are randomly selected to create seven low-light images, resulting in a total of 4550 images. Of these, 700 composite images derived from 100 normal-light images comprise the test set, and the remainder constitute the training set. Table 2 presents seven randomly chosen parameter examples.
Evaluation Metrics
The present study employs six evaluation metrics to quantitatively evaluate the performance of the proposed low-light image enhancement algorithm: PSNR, SSIM, SNR, normalized mutual information (NMI) (Studholme et al. 1999) [32], learned perceptual image patch similarity (LPIPS) (Zhang et al. 2018) [33], and normalized root mean square error (NRMSE) (Hyndman et al. 2006) [34]. This ensemble forms a comprehensive set of performance measures. The numerical range for PSNR and SNR is typically between zero and positive infinity, with higher values indicating better image quality. SSIM values typically range from −1 to 1, with values closer to 1 indicating higher image quality. NMI values range from 0 to 1, with higher values indicating better image similarity. LPIPS and NRMSE values range from 0 to positive infinity, with lower values indicating better image quality. PSNR reflects the level of image distortion, SSIM measures the similarity between two images, SNR indicates the signal-to-noise ratio of the image, NMI reflects the correlation between images, NRMSE measures the pixel-wise error between images, and LPIPS measures the perceptual similarity between images. In the formulas for PSNR, SNR, SSIM, and LPIPS, MSE stands for mean squared error, M and N represent the width and height of the image, i and j denote the horizontal and vertical coordinates of a pixel, and x and y refer to the sample and the label, respectively.
x and y represent the sample and label, respectively. Moreover, µx and µy denote the means of x and y, and σx and σy denote their standard deviations. σxy represents the covariance between x and y, while C1 and C2 are constants that prevent the denominator from becoming too small and producing unstable outcomes.
In Equation (17), d(x, x0) represents the distance between image patches x and x0, W_l is the image feature vector, and l is the layer number. H and W denote the height and width of the image, respectively. ŷ^l and ŷ0^l represent the normalized values of the feature stack and channel unit of the l-th layer.
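Under the standard scikit-image and lpips APIs, the full-reference metrics above can be computed as follows. The use of scikit-image's normalized mutual information for NMI is one common choice; whether it matches the paper's exact definition is an assumption, as is the AlexNet backbone for LPIPS.

```python
import torch
import lpips
from skimage.metrics import (peak_signal_noise_ratio, structural_similarity,
                             normalized_root_mse, normalized_mutual_information)

loss_fn = lpips.LPIPS(net="alex")  # learned perceptual metric

def evaluate_pair(pred, ref):
    """pred, ref: HxWx3 uint8 RGB arrays (enhanced image and ground truth)."""
    scores = {
        "PSNR": peak_signal_noise_ratio(ref, pred, data_range=255),
        "SSIM": structural_similarity(ref, pred, data_range=255, channel_axis=-1),
        "NMI": normalized_mutual_information(ref, pred),
        "NRMSE": normalized_root_mse(ref, pred),
    }
    # LPIPS expects NCHW float tensors scaled to [-1, 1]
    to_t = lambda a: torch.from_numpy(a).permute(2, 0, 1).float()[None] / 127.5 - 1.0
    scores["LPIPS"] = loss_fn(to_t(pred), to_t(ref)).item()
    return scores
```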
Qualitative Analysis of Experimental Results
To assess the effectiveness of our low-light image enhancement algorithm, we conduct a comparative study with classical and efficient traditional techniques, including LIME (Guo et al. 2016) [35] and CLAHE (Yadav et al. 2014) [36]. Additionally, we compare our approach with representative deep learning algorithms, such as SCI (Ma et al. 2022) [37], RRDNet (Zhu et al. 2020) [38], LLFlow (Wang et al. 2022) [39], MIRNet (Zamir et al. 2020) [38], and Zero-DCE (Guo et al. 2020) [40]. Zero-DCE takes the image as input and generates a high-order curve; it simplifies the design of network structures and enhances images with basic operations. SCI achieves self-calibrated illumination learning through weight sharing. RRDNet is a three-branch convolutional neural network that decomposes input images into illumination, reflection, and noise components, estimates noise accurately, and restores illumination by iteratively predicting the loss for denoising. Our proposed GSA-Net is also included in the comparison, and each algorithm is evaluated in the same experimental setting.
All the algorithms above can address the issue of insufficient illumination in low-light images, and most of the enhancement algorithms can restore object contours and colors effectively. However, the CLAHE algorithm may cause color distortion in the enhanced image, whereas the LIME algorithm may result in uneven color restoration and chaotic hues. Although the RRDNet and Zero-DCE algorithms produce better overall visual effects than the previous two, they are insufficient at reducing noise and artifacts. The output image of the SCI algorithm has insufficient color constraints, resulting in an effect similar to that of a foggy image. The LLFlow and MIRNet algorithms show limited effectiveness in enhancing low-light remote sensing images. Conversely, our proposed algorithm generates enhanced images with uniform illumination and contrast, suppressing artifacts and noise and achieving superior subjective visual results compared to the other algorithms. Moreover, the comparison of enlarged details in the lower-left corner of the figure reveals more realistic and precise color restoration by the proposed algorithm (Figure 7).
Quantitative Analysis of Experimental Results
To further validate the advanced performance of the proposed model, Table 3 provides quantitative results for the algorithms described above; the optimal values are highlighted. GSA-Net achieves PSNR, SSIM, SNR, NMI, LPIPS, and NRMSE values of 30.110, 0.863, 24.361, 0.833, 0.172, and 0.232, respectively. The results demonstrate that the proposed model has significant advantages over the conventional algorithms LIME and CLAHE. Compared with the classic deep learning algorithm Zero-DCE, the SNR and NMI are improved by 23.9% and 14.0%, respectively. Compared to the LLFlow and MIRNet algorithms, the PSNR improves by 28% and 23.7%, respectively. In addition, the proposed model outperforms the state-of-the-art algorithm RRDNet in terms of SNR and NMI by 30.6% and 17.0%, respectively. The SCI algorithm's NMI index is similar to that of GSA-Net, while its remaining indicators are inferior to those of the proposed model. Moreover, both the LPIPS and NRMSE of the proposed model are the best among all algorithms, significantly outperforming the competition and indicating that GSA-Net is capable of learning features that conform to visual patterns. In conclusion, the proposed model achieves low-light remote sensing image enhancement from multiple perspectives and levels with outstanding performance.
Loss Experiment
To further establish the superiority of the proposed model, the MSE, MAE, and SSIM loss functions, the improved MAE loss function (Charbonnier [41]), and the proposed improved loss function were evaluated on the GSA-Net model using the NWPU VHR-10 dataset. The outcomes are presented in Table 4; the most significant outcomes are highlighted in bold. The MSE loss function aligns with the implicit information of the restored image owing to its simplicity, intuitiveness, effectiveness in preserving smoothness, and differentiability. However, its expression includes an exponent that emphasizes outliers, resulting in poor network convergence. The MAE loss function handles outliers and performs slightly better than MSE, but the discontinuous derivative of its analytical formula hinders the model's convergence. Charbonnier enhances MAE by incorporating a constant to mitigate the gradient leap problem, substantially increasing the SNR index. The SSIM loss function simulates the updated gradient of the visual system, preserving image texture details; while some indicators are markedly enhanced, its lack of sensitivity to the mean deviation of bright regions results in undersaturated colors. The proposed loss function strikes a balance between robustness and representability. Although its SNR and NRMSE indicators are slightly inferior, the other indicators outperform those of the comparative loss functions, confirming the improved prediction ability and robustness of the model in estimating model bias (Table 4).
Ablation Experiment
This study analyzed the effects of the GSA structure, SKFF, and DSC on remote sensing image enhancement performance by conducting experiments on the NWPU VHR-10 dataset. Table 5 compares the PSNR and SSIM values and the parameter counts of the various model variants according to the ablation results. When DSC is eliminated, the network's performance improves marginally, but the number of network parameters increases by 76%; the standard convolution is therefore replaced with a DSC block to obtain a lighter model and ease deployment. The other two enhancement measures each increase the network's image enhancement performance, and their combined use yields the best results.
Case Study
To validate the applicability of our proposed algorithm, we investigate remote sensing object detection under low-light conditions. As shown in Figure 8, YOLOX (Ge et al. 2021) [42] is used to detect objects in low-light and restored remote sensing images. The first row of the figure illustrates the detection of airplanes: compared with the restored image, three airplanes are not detected in the low-light remote sensing image, and a house is mistakenly identified as a ship. The second row demonstrates the detection of ships and ports: a house is incorrectly identified as ships and ports in the image on the left, whereas objects in the enhanced images are detected accurately. The third row demonstrates the detection of baseball and athletics fields: the athletics field and three baseball fields are not detected in the low-light image on the left, and a house is incorrectly identified as a ship, whereas all detections in the restored image are accurate. These results illustrate the encouraging application potential of our proposed algorithm for object detection via remote sensing.
Conclusions
In this study, we proposed a multilevel feature fusion algorithm for improving remote sensing images captured in low-light conditions. Specifically, we employed conv + PReLU layers to generate varied inputs with diverse spatial resolutions and designed the GSA module to capture global information exhaustively. In addition, SKFF was embedded in the model to fuse all information effectively. To help the network learn the mapping between paired images, we developed a combined loss function to improve the model's color recognition ability and enrich the color of the enhanced images. The experimental results on the NWPU VHR-10 dataset demonstrated superior subjective and overall performance of our algorithm compared to the majority of advanced algorithms. The relatively high structural similarity index indicates the applicability of our methodology to remote sensing. Future research will concentrate on lightweight models that decrease network space complexity while enhancing the visual effect of enhanced images. We will also strive to improve the existing models to handle various restoration tasks, including image denoising and deblurring.
Figure 4. Structure of the global attention mechanism.
Figure 5. The process of depthwise separable convolution for model lightweighting.
Figure 8. Remote sensing image object detection test experiment. (a) Detection of targets in low-light remote sensing images; (b) object detection on restored remote sensing images in this study.
Table 2. Seven randomly selected parameter examples.
Table 3. The comparison results of different algorithms.
Table 4. Evaluation results of different loss functions on the NWPU VHR-10 dataset.
Table 5. Comparison of results from ablation experiments.
"×" indicates that this module has not been added, while " √ " indicates that this module has been added. | 9,346.4 | 2024-01-01T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
GLOBAL STABILITY OF HOPFIELD NEURAL NETWORKS UNDER DYNAMICAL THRESHOLDS WITH DISTRIBUTED DELAYS
We study the dynamical behavior of a class of Hopfield neural networks with distributed delays under dynamical thresholds. Some new criteria ensuring the existence, uniqueness, and global asymptotic stability of the equilibrium point are derived. In these results, we do not require the activation functions to satisfy the Lipschitz condition, nor to be bounded, differentiable, or monotone nondecreasing. Moreover, the symmetry of the connection matrix is also not necessary. Thus, our results improve some previous works in the literature. These conditions are of great importance in the design and application of globally asymptotically stable Hopfield neural networks involving distributed delays under dynamical thresholds.
Introduction
During the last 30 years, Hopfield neural networks (Hopfield [9]) have been extensively studied and developed and have found many applications in different areas such as pattern recognition, model identification, and optimization. Such applications heavily depend on the networks' dynamical behaviors. Therefore, the analysis of dynamical behaviors has important significance for the design and application of Hopfield neural networks.
For Hopfield neural networks, one of the most investigated problems in dynamical behavior is the existence, uniqueness, and global asymptotic stability of the equilibrium point. The property of global asymptotic stability, which means that the domain of attraction of the equilibrium point is the whole space and many pseudostable points are eliminated, is of importance from the theoretical point of view as well as in practical applications in several fields. In particular, globally asymptotically stable Hopfield neural networks have been well studied for solving some classes of optimization problems and for adaptive control. A globally asymptotically stable Hopfield neural network is guaranteed to compute the global optimal solution independently of the initial condition and avoids spurious suboptimal responses (Kennedy and Chua [14]; Michel and Gray [17]). Such globally asymptotically stable Hopfield neural networks can also be applied to model identification, computational tasks, and so on (Kelly [13]). Thus many scientific and technical workers have joined this field of study with great interest, and many results on the global asymptotic stability of Hopfield neural networks with constant delays, with continuous bounded time-varying delays, or without delays have been reported (Hopfield [10]; Cao [1]; Zhang and Jin [20]; Marcus and Westervelt [16]; Mohamad [18]; Hirsch [8]; Hopfield and Tank [11]; Liu and Dickson [15]; Huang and Cao [12]; Guan and Chen [7]; Chen [2]). It is well known that the use of constant fixed delays and continuously varying delays provides a good approximation in some simple circuits. However, due to the presence of parallel pathways with a variety of axon sizes and lengths, neural networks usually have a spatial extent, and there will be a distribution of conduction velocities along these pathways and a distribution of propagation delays. Under these circumstances, the signal propagation is not instantaneous and cannot be described with discrete delays. An appropriate approach is to introduce continuously distributed delays determined by a delay kernel. To the best of our knowledge, few authors have studied the global asymptotic stability of Hopfield neural networks with distributed delays; examples include Feng and Plamondon [4], Gopalsamy and He [5], and Zhang and Jin [20].
On the other hand, Hopfield neural networks with dynamical thresholds did not receive any attention until the nineties of the twentieth century. Gopalsamy and Leung [6] first considered Hopfield neural networks with distributed delays under dynamical thresholds in the form of system (1.1), where a, b, and c are nonnegative constants; for the physical meaning of the signs in (1.1), one can refer to Gopalsamy and Leung [6]. By using a Lyapunov function, they established a sufficient condition ensuring the global asymptotic stability of the unique equilibrium point x* = 0 of system (1.1) in the case c = 0. In [22], Zhang et al. considered the more general model (1.4), where f : R → R is a globally Lipschitz function. By using Brouwer's theorem and Lyapunov functions, they established some sufficient conditions for the global asymptotic stability and global exponential stability of the equilibrium point x* for the cases c = 0 and c ≠ 0.
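The displayed equations for systems (1.1) and (1.4) did not survive extraction. Based on the surrounding description (the signal function tanh for (1.1); a decay rate d and a general globally Lipschitz f for (1.4)) and on the stability conditions quoted later (a(1 ± b) < 1 and a(1 + b) < d), they plausibly take the following forms; this is a reconstruction, not a verbatim quotation of the source.

```latex
% Plausible reconstruction of systems (1.1) and (1.4); not verbatim from the source.
\begin{align}
\dot{x}(t) &= -x(t) + a\,\tanh\!\Big[x(t) - b\int_{0}^{\infty} k(s)\,x(t-s)\,ds - c\Big], \tag{1.1}\\
\dot{x}(t) &= -d\,x(t) + a\,f\!\Big(x(t) - b\int_{0}^{\infty} k(s)\,x(t-s)\,ds - c\Big). \tag{1.4}
\end{align}
```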
Further, Zhang and Li [21] and Zhang et al. [22], respectively, considered the still more general model (1.5), where i = 1,...,n and f_j : R → R is a globally Lipschitz function. By using a homeomorphism map in [21] and topological degree tools in [22], constructing suitable Lyapunov functions, and applying the properties of M-matrices, they obtained some conditions for the existence, uniqueness, and global asymptotic stability or global exponential stability for model (1.5). However, in all the above-cited literature, the authors always assumed that the distributed delay kernels k(s), k_j(s), or k_ij(s) satisfy property (1.3). In [3], Cui considered model (1.1) further. By using differential inequalities and variation of constants, he dropped condition (1.3) and obtained new criteria for the global asymptotic stability of the equilibrium point x* = 0 of system (1.1) in the case c = 0; however, the method he used cannot be applied to the case c ≠ 0. In [23], Zhao considered a further model and obtained some conditions for global asymptotic stability by dropping the Lipschitz condition and condition (1.3). Motivated by the above discussion, our aim in this paper is to study further the existence, uniqueness, and global asymptotic stability of the equilibrium point of the following Hopfield neural network (1.7) with distributed delays under dynamical thresholds, where i = 1, 2, ..., n and n denotes the number of units in the neural network (1.7); x_i(t) represents the state of the ith neuron at time t; a_ij, b_ij, c_j, and d_j are constants; a_ij ≥ 0 denotes the strength of the jth neuron on the ith neuron; b_ij ≥ 0 denotes a measure of the inhibitory influence of the past history of the jth neuron on the ith neuron; c_j ≥ 0 denotes the neural threshold of the jth neuron; and d_j > 0 denotes the rate with which the jth neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs. k_ij : [0, +∞) → [0, +∞) is a continuous delay kernel satisfying (1.2), and f_j denotes the output of the jth neuron at time t.
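The display of system (1.7) also did not survive extraction. Read together with the parameter glossary above and the M-matrix condition DL^{-1} − A(E + B) quoted in Remark 3.7, a consistent reconstruction is the following; again, this is an inference from context rather than a verbatim quotation.

```latex
% Plausible reconstruction of system (1.7); not verbatim from the source.
\begin{equation}
\dot{x}_i(t) = -d_i\,x_i(t) + \sum_{j=1}^{n} a_{ij}\, f_j\!\Big(x_j(t)
  - b_{ij}\int_{0}^{\infty} k_{ij}(s)\,x_j(t-s)\,ds - c_j\Big),
\qquad i = 1,\dots,n. \tag{1.7}
\end{equation}
```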
The initial condition associated with (1.7) is of the form x_i(s) = φ_i(s) for s ∈ (−∞, 0], where φ(t) = (φ_1(t),...,φ_n(t)). In our results, we do not assume that the activation functions f_j (j = 1,...,n) are globally Lipschitz, nor that the delay kernels k_ij (i, j = 1,...,n) satisfy assumption (1.3). Here, we point out that our methods differ from the differential inequality methods that appeared in Cui [3]. Moreover, our conditions are easy to verify and to apply in applications and in the design of Hopfield neural networks.
For convenience, we introduce some notation. Let x = (x_1,...,x_n)^T denote a column vector (the symbol "T" denotes the transpose of x), and let |x| denote the absolute-value vector given by |x| = (|x_1|,...,|x_n|)^T. For matrices A and B, A ≥ B (A < B) means that each pair of corresponding elements of A and B satisfies "≥" ("<"). In particular, A is called a nonnegative matrix if A ≥ 0.
Existence and uniqueness of the equilibrium point
In this section, we study the existence and uniqueness of the equilibrium point of model (1.7). Before stating our main results, we first rewrite model (1.7) as system (2.1), in which X(t) = (x_1(t),...,x_n(t))^T. In order to study the existence and uniqueness of the equilibrium point, we consider the initial value problem associated with the autonomous system (2.1), in which the initial functions are given by (1.8). Let Ω be an open subset of R^n. Lemma 2.1. Let G : Ω → R^n be continuous and satisfy the following condition: corresponding to any η ∈ Ω and its neighborhood U, there exist a constant k > 0 and functions g_j and Φ_l (j, l = 1,...,n) such that the corresponding decomposition holds on U, where each g_j : U → R is a continuously differentiable function in η satisfying the stated relation, and each Φ_l : R → R is continuous and of bounded variation on bounded subintervals. Then there exists a unique solution of the initial value problem (1.8)-(2.2) on any interval containing the initial functions (1.8).
Proof. By Lemma 2.1, the system (1.8)-(2.2) has a unique solution x(t) = (x_1(t),...,x_n(t))^T, from which we obtain (2.8), where, for i, j = 1,...,n, the quantities in (2.10) are defined accordingly. The initial condition associated with system (2.9) is of the form (2.11), and we choose the initial functions as in (2.12). By Lemma 2.1, the system (2.8) or (2.9) has a unique solution. Obviously, y_i(t) = 0 is the only solution of the system (2.8) or (2.9), which implies that x_i(t) = x_i* (i = 1,...,n) is the unique solution satisfying (2.8); hence there exists a unique point satisfying (2.7), which guarantees the existence of a unique equilibrium point of the system (1.7). This completes the proof.
Global asymptotic stability of the equilibrium point
In this section, we consider the global asymptotic stability of system (1.7) and establish some new criteria which require neither that the signal propagation functions f_i satisfy the Lipschitz condition nor that the delay kernels k_ij satisfy assumption (1.3), so that the hypotheses on the parameters of the system are less restrictive. We note that the equilibrium point x* of system (1.7) is globally asymptotically stable if and only if the equilibrium point y* = 0 of system (2.9) is globally asymptotically stable.
Proof. For any ε > 0, let P = (p_1,..., p_n)^T = (I − M)^{−1}Eε, where E = (1,...,1)^T, which implies Σ_{j=1}^n m_ij p_j + ε = p_i (i = 1,...,n). We first prove that the set S = {Ψ ∈ C | ‖Ψ‖_∞ ≤ P} is a positively invariant set of system (2.9). In view of condition (H2) and applying the necessary and sufficient conditions for an M-matrix, we know that (I − M)^{−1} ≥ 0, and so (I − M)^{−1}Eε ≥ 0. In the following, we show that (3.2) holds. In order to prove (3.2), we only need to prove that y(t) ≤ qP for t ≥ 0 and any given q > 1. For the sake of contradiction, we suppose that there exist some i ∈ {1,...,n} and t_1 > 0 such that (3.4) holds. By (H1), and noting that q > 1 and Σ_{j=1}^n m_ij q p_j + qε = q p_i, we obtain (3.7), a contradiction with (3.4), and so (3.3) holds; letting q → 1, (3.2) follows. Therefore, the set S is a positively invariant set of system (2.9), and it follows that the equilibrium point y(t) = 0 of system (2.9) is uniformly stable, by the relation between positively invariant sets and uniform stability. The proof is complete.
In view of the above proof, we have the following result.
Proof. By the proof of Theorem 3.1, for any given Ψ ∈ C there must be q > 1 such that all solutions of system (2.9) satisfy |y(t)| < qP for t ≥ 0 whenever ‖Ψ‖_∞ < qP, which implies that the conclusion of Corollary 3.2 is true.
If the signal propagation functions f_j are globally Lipschitz, we have the following corollary.
Corollary 3.5. Assume that the signal propagation functions f_j are globally Lipschitz with Lipschitz constants L_j > 0 and that condition (H2) holds. Then the equilibrium point x* of system (1.7) is globally asymptotically stable. Remark 3.6. When n = 1 and the signal propagation function is f(t) = tanh t, system (1.7) becomes system (1.1), which was first considered by Gopalsamy and Leung [6]; in their article, they assumed that the delay kernel k(s) satisfies condition (1.3) and obtained the global asymptotic stability of the unique equilibrium x* = 0 under the conditions a(1 − b) < 1 and a(1 + b) < 1 for the case c = 0. Cui [3] also considered system (1.1); he deleted condition (1.3) and obtained the global asymptotic stability of the unique equilibrium x* = 0 under the conditions a(1 − b) < 1 and a(1 + b) < 1 for the case c = 0. Obviously, in the above-mentioned articles, the signal propagation function f(t) = tanh t is globally Lipschitz. Zhang et al. [22] considered system (1.4), which generalizes system (1.1); they also assumed that the delay kernel k(s) satisfies condition (1.3) and that the signal propagation function f(t) is globally Lipschitz, and obtained the global asymptotic stability of the unique equilibrium x* = 0 under the condition a(1 + b) < d for the cases c = 0 and c ≠ 0. However, in Theorem 3.4 and Corollary 3.5 of this paper, the signal propagation functions f_j(t) need not be globally Lipschitz, and condition (1.3) is not needed either. Clearly, our results contain and improve those given in the above-mentioned literature, and the conditions in this paper are less restrictive on the parameters than those given in the above-mentioned articles.
Remark 3.7. Zhang and Li [21] and Zhang et al. [22] considered system (1.5), which further generalizes system (1.1). When the signal propagation functions $f_j(t)$ are globally Lipschitz with Lipschitz constants $L_j$ and condition (1.3) holds, they proved that the unique equilibrium point $x^*$ is globally asymptotically stable under the condition that $M = DL^{-1} - A(E + B)$ is an M-matrix; clearly, $M$ being an M-matrix is equivalent to the condition $\rho(M) < 1$. However, neither condition (1.3) nor the Lipschitz condition on the signal propagation functions $f_j(t)$ is required in Theorem 3.4 of this paper; the results given here therefore improve on the previous works.
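As a companion to Remark 3.7, here is a hedged sketch of how the M-matrix property could be tested numerically, using the standard characterization of a nonsingular M-matrix as $sI - B$ with $B \ge 0$ and $s > \rho(B)$; the example matrix is a hypothetical illustration, not the paper's $DL^{-1} - A(E + B)$.

```python
import numpy as np

def is_nonsingular_m_matrix(M):
    """Check the M-matrix property used in Remark 3.7.

    A square matrix M is a nonsingular M-matrix iff its off-diagonal
    entries are nonpositive and M can be written as sI - B with B >= 0
    and s > rho(B); the latter is what we test here.
    """
    M = np.asarray(M, dtype=float)
    off = M - np.diag(np.diag(M))
    if np.any(off > 0):          # off-diagonal entries must be nonpositive
        return False
    s = np.max(np.diag(M))
    if s <= 0:
        return False
    B = s * np.eye(M.shape[0]) - M       # B >= 0 by construction
    return s > np.max(np.abs(np.linalg.eigvals(B)))

# Hypothetical example in the spirit of Remark 3.7.
print(is_nonsingular_m_matrix([[2.0, -0.5], [-0.4, 1.5]]))   # True
```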
Two illustrative examples
In this section, we will give two examples to illustrate our results.
"Mathematics"
] |
Candida albicans Ethanol Stimulates Pseudomonas aeruginosa WspR-Controlled Biofilm Formation as Part of a Cyclic Relationship Involving Phenazines
In chronic infections, pathogens are often in the presence of other microbial species. For example, Pseudomonas aeruginosa is a common and detrimental lung pathogen in individuals with cystic fibrosis (CF) and co-infections with Candida albicans are common. Here, we show that P. aeruginosa biofilm formation and phenazine production were strongly influenced by ethanol produced by the fungus C. albicans. Ethanol stimulated phenotypes that are indicative of increased levels of cyclic-di-GMP (c-di-GMP), and levels of c-di-GMP were 2-fold higher in the presence of ethanol. Through a genetic screen, we found that the diguanylate cyclase WspR was required for ethanol stimulation of c-di-GMP. Multiple lines of evidence indicate that ethanol stimulates WspR signaling through its cognate sensor WspA, and promotes WspR-dependent activation of Pel exopolysaccharide production, which contributes to biofilm maturation. We also found that ethanol stimulation of WspR promoted P. aeruginosa colonization of CF airway epithelial cells. P. aeruginosa production of phenazines occurs both in the CF lung and in culture, and phenazines enhance ethanol production by C. albicans. Using a C. albicans adh1/adh1 mutant with decreased ethanol production, we found that fungal ethanol strongly altered the spectrum of P. aeruginosa phenazines in favor of those that are most effective against fungi. Thus, a feedback cycle comprised of ethanol and phenazines drives this polymicrobial interaction, and these relationships may provide insight into why co-infection with both P. aeruginosa and C. albicans has been associated with worse outcomes in cystic fibrosis.
Introduction
Pseudomonas aeruginosa is an opportunistic pathogen capable of causing severe nosocomial infections and infections in immunocompromised patients. P. aeruginosa is a common pathogen of individuals with cystic fibrosis (CF), a genetic disease that is caused by a mutation in the gene coding for the CFTR ion transporter and strongly associated with chronic, recalcitrant lung infections. Altered CFTR function leads to a fluid imbalance that results in thick, sticky mucus in the lungs that is difficult to clear, thus creating a hospitable environment for microbial growth, biofilm formation, and persistence. While P. aeruginosa is a common microbe in the CF lung, it is rarely the only microbe present [1][2][3][4][5]. Co-infections of P. aeruginosa with other bacterial and fungal species are common, and there is a need to understand how these complex multi-species infections impact disease course and treatability. For example, the presence of the fungus Candida albicans correlates with more frequent exacerbations and a more rapid loss of lung function in CF patients [6,7]. Additional studies are needed to determine if the presence of the fungus contributes to more severe disease.
Published reports strongly suggest that in the CF lung, P. aeruginosa forms biofilms [8], described as hearty aggregations of cells in a sessile group lifestyle that includes extracellular matrix comprised of proteins, membrane vesicles, DNA, and exopolysaccharides. A biofilm existence provides many advantages to P. aeruginosa including increased antibiotic tolerance [9,10]. As with many Gram-negative species, P. aeruginosa biofilm formation is positively regulated by the secondary signaling molecule cyclic-di-GMP (c-di-GMP) [11]. C-di-GMP is formed from two molecules of GTP by diguanylate cyclases (DGCs) and its levels inversely correlate to motility. High levels of c-di-GMP promote biofilm formation in a number of ways including via increased matrix production and decreased flagellar motility [12][13][14].
P. aeruginosa also produces a class of redox-active virulence factors called phenazines. In CF sputum, the phenazines pyocyanin (PYO) and phenazine-1-carboxylate (PCA) are found in micromolar (5–80 µM) concentrations, and their levels are inversely correlated with lung function [15]. Phenazines play a role in the relationships between P. aeruginosa and eukaryotic cells. Several studies have shown how phenazines can negatively affect mammalian physiology [16,17]. In addition, phenazines impact different fungi, including C. albicans. At high concentrations, phenazines are toxic to C. albicans, and lower concentrations of phenazines reduce fungal respiration and impair growth as hyphae [18]. Phenazines figure prominently in shaping the chemical ecology within mixed-species communities. For example, when exposed to low concentrations of phenazines, C. albicans increases the production of fermentation products such as ethanol by 3- to 5-fold [18]. Furthermore, P. aeruginosa-C. albicans co-cultures form red derivatives of 5-methyl-phenazine-1-carboxylic acid (5MPCA) that accumulate within fungal cells [19].
In the present study, we show that ethanol produced by C. albicans stimulated P. aeruginosa biofilm formation and altered phenazine production. Ethanol caused a decrease in surface motility in both strains PA14 and PAO1 concomitant with a stimulation in levels of c-di-GMP, a second messenger nucleotide that promotes biofilm formation. Through a genetic screen, we found that the diguanylate cyclase WspR, a response regulator of the Wsp chemosensory system, was required for this response. Elements upstream and downstream of WspR signaling were required for the ethanol response. Ethanol no longer stimulated biofilm formation in a mutant lacking WspA, the membrane-localized sensor methyl-accepting chemotaxis protein (MCP) that is involved in the activation of WspR [20]. In addition, an intact Pel exopolysaccharide biosynthesis pathway, known to be stimulated by c-di-GMP derived from the Wsp pathway [21,22], was also required for ethanol stimulation of biofilm formation. The effects were observed on both abiotic surfaces and a cell culture model for P. aeruginosa and P. aeruginosa-C. albicans airway colonization. We found that both exogenous and fungally-produced ethanol enhanced the production of two phenazine derivatives known for their antifungal activity [19,23], 5MPCA and phenazine-1-carboxamide (PCN), through a Wsp-independent pathway and independent of ethanol catabolism. Because phenazines stimulate fungal ethanol production [18], we present evidence for a signaling cycle that helps drive this polymicrobial interaction.
Results
Ethanol stimulates biofilm formation and suppresses swarming in P. aeruginosa strain PA14

Our previously reported findings show that P. aeruginosa produces higher levels of two phenazines, PYO and 5MPCA [24,25], when cultured with C. albicans and that phenazines stimulate C. albicans ethanol production [18]. Thus, we sought to determine how fungally-derived ethanol affects P. aeruginosa. A concentration of 1% ethanol (v/v) was chosen for these studies based on the detection of comparable levels of ethanol in C. albicans supernatants from cultures grown with phenazines [18]. The presence of 1% ethanol in the culture medium did not affect P. aeruginosa growth in minimal M63 medium with glucose (Fig. S1) or LB (doubling time of 36±2 min in LB versus 39±2 min in LB with ethanol), or on solid LB medium (Fig. S1 inset), except that the final culture yield in M63 was slightly higher in cultures amended with ethanol (Fig. S1).
When we performed a microscopic analysis of the effects of ethanol on P. aeruginosa strain PAO1, we observed a significant increase in attachment of cells to the bottom of a titer dish well within 1 h (15±5 cells per field in vehicle-treated cultures compared to 31±6 cells per field in cultures with ethanol, p < 0.01), and development of microcolonies was strongly enhanced (Fig. 1A). Ethanol also promoted an increase in the number of attached cells and microcolonies in cultures of another P. aeruginosa strain, PA14 (Fig. 1B).
Using two assays that assess biofilm-related phenotypes (swarming motility and twitching motility), we sought to gain additional insight into how ethanol impacted biofilm formation. Our initial studies focused on strain PA14. We found that ethanol repressed swarming motility, a behavior that is inversely correlated with biofilm formation (Fig. 1C). Ethanol did not affect type IV pili-dependent twitching motility, a form of movement that is required for microcolony formation in biofilms on plastic (Fig. S2B) [26].
Ethanol catabolism is not necessary for the inhibition of swarming
Because P. aeruginosa can catabolize ethanol [27], we sought to determine if ethanol consumption contributed to the repression of swarming motility. P. aeruginosa first oxidizes ethanol to acetaldehyde by an ethanol dehydrogenase, ExaA, which requires the cofactor PQQ (pyrroloquinoline quinone) [27]. Acetaldehyde is further oxidized to acetate by an NAD+ dependent acetaldehyde dehydrogenase (ExaC), and the acetate is subsequently oxidized to acetyl-CoA by AcsA [28]. We retrieved the exaA::TnM, pqqB::TnM, and acsA::TnM mutants predicted to be defective in ethanol catabolism from the P. aeruginosa strain PA14 NR transposon library [29] and confirmed the transposon insertion sites by PCR (see Materials and Methods for more detail). As predicted, none of these mutants grew with ethanol as the sole carbon source, and growth on glucose was unaffected (Fig. S3A).
Author Summary
In many human infections, several species of microbes are often present. This is typically the case with the disease cystic fibrosis, characterized by thick mucus in the lungs that is colonized by bacteria and fungi. Here, we show evidence that interactions between the bacterium Pseudomonas aeruginosa and the fungus Candida albicans result in attributes of infection that are worse for the human host. We found that ethanol, such as that produced by C. albicans, causes increased levels of a signaling molecule in P. aeruginosa that promotes biofilm formation. Biofilm formation by P. aeruginosa is associated with infections that are more difficult to treat. Ethanol stimulated P. aeruginosa colonization of plastic surfaces and airway cells, and we identified components of this mechanism. Fungally-produced ethanol also changes the spectrum of phenazine toxins produced by P. aeruginosa, and phenazines are associated with worse lung function in people with cystic fibrosis. In light of the fact that phenazines interact with C. albicans to promote ethanol production, we propose a positive feedback loop between C. albicans and P. aeruginosa that contributes to worse disease. Our findings could have implications for the study and treatment of multi-species infections.
When we used these mutants in the swarm assay, we found no difference in the effects of ethanol on these three ethanol catabolism mutants in comparison to the wild-type parental strain (Fig. S3B) indicating that ethanol catabolism was not required for the ethanol response. Furthermore, other carbon sources such as glycerol, another fungal fermentation product, or choline, another two-carbon alcohol degraded by a PQQ-dependent enzyme, did not inhibit swarming motility (Fig. S4).
Ethanol increases c-di-GMP levels through WspR
Ethanol stimulated attachment and biofilm formation on plastic and inhibited swarming motility (Fig. 1). These two phenotypes are positively and negatively regulated by levels of the second messenger molecule c-di-GMP [30]. Thus, we measured intracellular levels of this dinucleotide in P. aeruginosa strain PA14 cells grown on swarm plates with or without 1% ethanol for 16.5 h as described previously. We found a 2.4-fold increase in c-di-GMP levels in cells exposed to ethanol (Fig. 2).
To identify the enzyme(s) responsible for this increase, we screened a collection of 31 P. aeruginosa strain PA14 mutants [31] defective in different genes predicted to encode proteins that may modulate c-di-GMP levels based on the detection of a DGC and/or an EAL domain [22,31]. We found that one mutant, ΔwspR, was strikingly resistant to the repression of swarming by ethanol (Fig. 3). As expected, this mutant also had a slight hyperswarming phenotype when compared to the wild type in control conditions [31], and both phenotypes were complemented by the wild-type wspR allele on an arabinose-inducible plasmid when grown in the presence of 0.02% arabinose (Fig. 3). The empty vector (EV) control exhibited a swarming pattern comparable to that of the ΔwspR mutant.
WspR is a response regulator with a GGDEF domain [32], which is associated with diguanylate cyclase activity [20]. Consistent with the observation that ΔwspR continued to swarm on medium with ethanol, c-di-GMP levels were not different between cultures with and without ethanol in the ΔwspR background (Fig. 2). These data suggest that WspR activity, and thus c-di-GMP levels, are enhanced by ethanol.
WspR is known to regulate the production of the Pel polysaccharide [21,22], and production of Pel is associated with colony wrinkling and biofilm formation [33]. After 72 hours on swarm plates, we also observed that ethanol strongly promoted colony wrinkling, while the addition of equivalent amounts of other carbon sources, such as glycerol or choline, did not have this effect. Furthermore, the colony wrinkling induced by ethanol was less apparent in a ΔwspR strain (not shown) and completely absent in a strain lacking pelA, an enzyme required for Pel biosynthesis (Fig. 4). The ΔpelA mutant, like the ΔwspR mutant, continued to swarm in the presence of ethanol (Fig. 4), suggesting that the repression of swarming in the presence of ethanol was, at least in part, due to increased Pel production.

[Figure 1. Ethanol represses swarming and stimulates biofilm formation by P. aeruginosa. A. P. aeruginosa strain PAO1 attachment to the bottom of a polystyrene plastic well after 6 hours in medium with and without 1% ethanol (EtOH). B. P. aeruginosa strain PA14 attachment to plastic as assessed by quantification of microcolonies per field in wells containing medium with or without ethanol for 7 h. Error bars represent the standard deviation (p < 0.01 as determined by a Student's t-test, N = 12). C. P. aeruginosa strain PA14 swarming in the absence and presence of 1% ethanol. Images are representative of results in more than ten independent experiments. doi:10.1371/journal.ppat.1004480.g001]
Ethanol induces c-di-GMP signaling in P. aeruginosa strain PAO1 through WspA and WspR

As we found that ethanol stimulated biofilm formation in P. aeruginosa wild-type strains PA14 and PAO1 (Fig. 1) and that WspR mediated the ethanol effect in strain PA14, we also examined the role of WspR in the ethanol response in strain PAO1. As shown above, PAO1 wild-type cells had increased early attachment and subsequent microcolony formation on plastic when ethanol was added to the medium (Fig. 5). Consistent with our model that ethanol is acting through WspR, ethanol did not stimulate surface colonization in the PAO1 ΔwspR mutant (Fig. 5). We also examined the ethanol-responsive phenotype for P. aeruginosa strain PAO1 ΔwspA, which lacks the membrane-bound receptor that is the most upstream element described in the Wsp system [20]. Like ΔwspR, ΔwspA did not show increased attachment to plastic upon the addition of ethanol (Fig. 5), suggesting that both the MCP sensor and the WspR response regulator were required for the response to ethanol. Ethanol also promoted colony wrinkling in strain PAO1, as was observed in strain PA14, consistent with the prediction that increased WspR activity would lead to increased matrix production. Enhanced wrinkling with ethanol was shown most clearly for both strains in non-motile (flgK) mutants, which formed colonies of similar size regardless of the presence of ethanol (Fig. S5). Because strain PAO1 WT does not swarm robustly in control conditions, the effects of ethanol on swarming in strain PAO1 were not quantified.
Ethanol promotes WspR clustering and a functional Wsp system is required for this effect
Previous studies have shown that the fluorescently-tagged WspR protein forms intracellular clusters when in its active phosphorylated form upon incubation of cells on an agar surface, and cluster formation is positively correlated with WspR activity [20]. To complement the mutant analyses, we determined if ethanol also promoted WspR-YFP clustering, and if known components of the WspR activation system were required for WspR stimulation by ethanol. To facilitate these analyses, we used the WspR variant WspR(E253A)-YFP, which forms larger clusters that are more easily visualized [34]. In these studies, we observed a two-fold increase in WspR clustering in the presence of ethanol (Fig. 5C). To determine if WspF, a methylesterase that negatively regulates WspR activity [21], was involved in the regulation of WspR in response to ethanol, we also assessed WspR clustering in a ΔwspF background where WspR is constitutively active. In ΔwspF, WspR clustering was higher than in the wspF+ reference strain, and WspR clustering was not further stimulated by ethanol, lending support for the model that ethanol was acting through the Wsp system and not through an independent pathway for WspR activation.
C. albicans and ethanol promote airway epithelial cell monolayer colonization
To understand the effects of ethanol on P. aeruginosa in a well-established CF-relevant disease model, we studied the effects of ethanol on P. aeruginosa strain PAO1 in the context of bronchial epithelial cells with the most common CF genotype (homozygous CFTRΔF508) [35,36]. We cultured P. aeruginosa strain PAO1 with the epithelial cells in medium without and with 1% ethanol, and observed an obvious enhancement in the size of biofilm microcolonies (Fig. 6A) and a 2.2-fold increase in colony forming units (CFUs) on the airway cells with ethanol (Fig. 6B). When the same experiment was performed with the ΔwspR or ΔwspA mutants, no stimulation by ethanol was observed. Ethanol alone did not impact epithelial cell viability as measured by an LDH release assay (9.44% ± 0.98 LDH release for control and 10.47% ± 1.2 LDH release with ethanol, N = 3), and other studies have also found these concentrations of ethanol to be well below those that cause overt toxicity to epithelial cells or disruption of epithelial barrier integrity [37,38].
When P. aeruginosa PAO1 and C. albicans were co-inoculated into epithelial cell co-cultures, 4.7-fold more P. aeruginosa CFUs were found to be associated with the monolayer after 6 h (Fig. 7).
To determine if C. albicans-derived ethanol contributed to the enhanced colonization by P. aeruginosa in the presence of C. albicans, we used a C. albicans adh1/adh1 mutant that produced lower levels of ethanol. We constructed the adh1 null strain and its complemented derivative, and confirmed that the absence of ADH1 caused a reduction in ethanol by HPLC analysis of culture supernatants, a finding consistent with previously published work [39]. When P. aeruginosa was co-cultured with the C. albicans adh1/adh1 strain, there was a significant decrease in P. aeruginosa CFUs recovered, and this defect was corrected upon complementation with the ADH1 gene in trans. Furthermore, there was no significant difference in the stimulation of colonization by wild-type or adh1/adh1 mutant C. albicans in the ΔwspR or ΔwspA backgrounds (Fig. S6). Together, these data strongly suggest that C. albicans-produced ethanol promotes P. aeruginosa colonization of both abiotic and biotic surfaces through activation of the Wsp system, which likely exerts these effects through promoting Pel production.
Exogenous and C. albicans-produced ethanol alters P. aeruginosa phenazine production through a WspR-independent pathway

In part, these studies were instigated by the finding that P. aeruginosa phenazines strongly stimulate C. albicans ethanol production [18]. Thus, we were intrigued by the observation that colonies on ethanol-containing swarm plates, but not control plates, contained abundant emerald green crystals, similar to those formed by reduced phenazine-1-carboxamide (PCN) [40] (Fig. 8A, Fig. 4 and Fig. S7A), which could indicate a reciprocal relationship between ethanol and phenazines. Phenazine concentrations were measured using HPLC in either extracts from P. aeruginosa strain PA14 colonies or extracts from the underlying agar. In extracts from wild-type colonies, PCN and PCA concentrations were 24.2- and 5.8-fold higher, respectively, when ethanol was in the medium (Fig. S7B); much smaller differences in PCN and PCA concentrations were found in extracts of the underlying agar (Fig. S7C). Because PCA is the precursor for all other phenazine derivatives, including PCN (Fig. S7A), we further explored the effect of ethanol on PCA production. For this, we measured levels of PCA in a strain lacking all of the PCA-modifying enzymes (PhzH, PhzM, and PhzS; see Fig. S7A for pathways) [41]. We found that ΔphzHMS colonies contained 1.7-fold more PCA (Fig. S7D) and released 1.3-fold more PCA into the agar (Fig. S7E) when grown in the presence of ethanol compared to control conditions. These data suggest that ethanol may cause a minor increase in PCA, and that it has greater effects on which species of phenazines are formed. The differences in phenazine levels or profiles did not appear to be responsible for ethanol effects on swarming, as the Δphz mutant [42], which lacks phzA1-G1 and phzA2-G2, was like the wild type in that its swarming was repressed in the presence of ethanol, but it swarmed robustly in its absence (Fig. S7F).
To determine if there was a connection between ethanol effects on Wsp signaling and ethanol stimulation of PCN levels, we assessed PCN accumulation in mutants lacking wspR or pelA. We found that both strains responded like the wild type in terms of PCN crystal formation upon growth with ethanol (Fig. S8A and Fig. 4). Similarly, ethanol catabolic mutants still showed enhanced levels of PCN crystals upon ethanol exposure (Fig. S8A).

[Figure 7. C. albicans promotes P. aeruginosa strain PAO1 WT biofilm formation on airway epithelial cells in part through ethanol production. P. aeruginosa PAO1 WT was cultured with a monolayer of ΔF508 CFTR-CFBE cells alone or with C. albicans CAF2 (reference strain), the C. albicans adh1/adh1 mutant (adh1), and its complemented derivative, adh1/adh1+ADH1 (adh1-R). Data are combined from three independent experiments with 3-5 technical replicates per experiment (* represents a statistically significant difference (p < 0.05) between indicated strains). Error bars represent one standard deviation. doi:10.1371/journal.ppat.1004480.g007]
Having observed alterations in the phenazine profile induced by ethanol, we examined the impact of ethanol on the production of a fourth phenazine derivative, 5MPCA, which we have previously shown to be released by P. aeruginosa when in the presence of C. albicans [19,24]. Because P. aeruginosa-produced 5MPCA is converted into a red pigment within C. albicans cells, 5MPCA accumulation can be followed by observing the formation of a red color where P. aeruginosa and C. albicans are cultured together [19,24]. To examine the effects of ethanol production on the accumulation of red 5MPCA derivatives, we again used the C. albicans adh1/adh1 mutant and its complemented derivative. Strikingly, when P. aeruginosa was cultured on lawns of the C. albicans adh1/adh1 strain, a strong decrease in red pigmentation was observed (Fig. 8B). When ADH1 was provided in trans to the adh1/adh1 mutant, accumulation of the red pigment was restored (Fig. 8B). Neither ethanol catabolism nor WspR activity was required for the stimulation of levels of 5MPCA derivatives by P. aeruginosa on fungal lawns (Fig. S8B).
Together, our data suggest that ethanol only slightly increases total phenazine production ( Fig. S7D and E) but more strongly affects the derivatization of phenazines in P. aeruginosa colonies ( Fig. S7B and C). Furthermore, C. albicans-produced ethanol stimulated P. aeruginosa 5MPCA production, and in turn, phenazines, including 5MPCA analogs, promote ethanol production [18]. Thus, it appears that P. aeruginosa-C. albicans interactions include a positive feedback loop that promotes fungal ethanol production and P. aeruginosa Wsp-dependent biofilm formation when the two species are cultured together.
Discussion
This paper reports new effects of ethanol on P. aeruginosa virulence-related traits, and illustrates that these effects occur through multiple pathways (Fig. 9). We found that ethanol: i) promoted attachment to and colonization of plastic and airway epithelial cells, ii) decreased swarming, but not twitching motility, iii) increased Pel-dependent colony wrinkling, and iv) increased c-di-GMP levels. All of these responses to ethanol required the diguanylate cyclase WspR. WspR is part of the Wsp chemosensory system, which is a member of the "alternative cellular function" (ACF) chemotaxis family [20,21,43]. The Wsp chemosensory system is different from the chemotaxis systems in P. aeruginosa in terms of its localization and response to environmental signals [44]. The membrane-bound receptor WspA and the CheA homologue WspE are necessary for the Wsp system to function, and WspE activates WspR via phosphorylation [44]. Consistent with our hypothesis that the entire Wsp system is required for the response to ethanol, we found that a wspA mutant was also insensitive to the effects of ethanol on biofilm formation (Fig. 5). The activation of WspR was independent of ethanol catabolism and independent of phenazine production. Ethanol and other alcohols can increase the rigidity of cell membranes by promoting an altered composition of fatty acids [45], and future studies will determine if the Wsp system, particularly the membrane-localized WspA, can be activated by changes in the lipid composition or the physical properties of P. aeruginosa membranes. Because the Wsp system is also activated upon contact with a surface [20], it is intriguing to consider how these stimuli might be similar. Ethanol had mild, if any, effects on biofilm formation at the air-liquid interface in a commonly used 96-well microtiter dish assay in either strain (Fig. S9), suggesting that in this environment, different Wsp-activating cues were not additive.
C. albicans and other Candida spp. are commonly detected in the sputum of CF patients, and clinical studies suggest that the presence of both P. aeruginosa and C. albicans results in a worse prognosis for CF patients [6,46]. In vivo ethanol production by other fungi has been documented [47,48], but a link between Candida spp. and ethanol production in the lung has not yet been made. It is important to note, however, that ethanol was one of two metabolites in exhaled breath condensate that differentiated CF from non-CF individuals [49]. Thus, regardless of the source of ethanol, be it fungal or bacterial, the effect of ethanol on pathogens such as P. aeruginosa is likely of biological and clinical relevance. We tested this interaction in the context of CF, but this polymicrobial interaction likely occurs in other contexts as well.
As shown above, ethanol promoted biofilm formation and likely concomitant increases in drug tolerance. In the airway epithelial cell system, P. aeruginosa CFU recovery was increased 3-fold by addition of ethanol (Fig. 6B) and 4.7-fold by co-culture with C. albicans (Fig. 7). A two-fold difference is comparable to the differences in colonization between wild-type P. aeruginosa strains and mutants lacking genes known to play a role in virulence in animal models. For example, a ΔplcHR mutant lacking hemolytic phospholipase C or a Δanr strain defective in a global regulator have 1.3- to 2.6-fold fewer CFUs recovered from airway cells compared to wild type, and notable differences in animal models [50,51]. Hence the presence of ethanol may result in increased virulence of P. aeruginosa in the host. Ethanol has also been shown to promote P. aeruginosa conversion to a mucoid state [52], in which the exopolysaccharide alginate is overproduced; mucoidy is common in CF isolates and is correlated with a decline in lung function [53,54]. Ethanol has been shown to enhance virulence and biofilm formation by other lung pathogens such as Staphylococcus aureus [55] and Acinetobacter baumannii [56][57][58][59] via mechanisms that have not yet been described. As in P. aeruginosa (Fig. S1A), ethanol caused a slight stimulation of growth in A. baumannii [58].

[Figure 8. Ethanol leads to higher levels of PCN crystal formation and 5MPCA derivatives. A. P. aeruginosa strain PA14 wild type (WT) was grown on medium without and with 1% ethanol. With ethanol, PCN crystals form and the colony has a yellowish color likely attributed to reduced PCN. B. P. aeruginosa strain PA14 WT was cultured on lawns of C. albicans CAF2 (WT reference strain), the C. albicans adh1/adh1 mutant, and its complemented derivative (adh1/adh1+ADH1); the PA14 Δphz mutant defective in phenazine production was plated on C. albicans CAF2 for comparison. doi:10.1371/journal.ppat.1004480.g008]
In addition to the effects of ethanol on P. aeruginosa, ethanol is an immunosuppressant that negatively influences the lung immune response [60][61][62][63][64]. In a mouse model, ethanol inhibits lung clearance of P. aeruginosa by inhibiting macrophage recruitment [65]. Together, these observations suggest that in mixed infections, P. aeruginosa may promote the production of ethanol by fungi, and that fungally-produced ethanol may in turn enhance the virulence and persistence of co-existing pathogens, and thus may directly impact the host.
It is not yet known how ethanol influences the spectrum of P. aeruginosa phenazines produced. In a previous study, we found evidence for increased production and release of 5MPCA when P. aeruginosa is grown in co-culture with C. albicans, and that live C. albicans is required for this effect [19]. More recent studies show that C. albicans ethanol production increased in the presence of even very low concentrations of the 5MPCA analog phenazine methosulfate [18], and that the 5MPCA-like compounds were even more effective inhibitors of fungi than PCA and PYO, the two phenazines normally produced when P. aeruginosa is grown in mono-culture. Here, our findings suggest a feedback loop in which C. albicans-produced ethanol promoted the release of phenazines (Fig. 7) that may promote further ethanol production [18]. It is also important to consider that some studies have reported that 5MPCA and PCN have enhanced antifungal activity when compared to PCA and PYO [19,23,24]. The ethanol-induced changes in PCA were not as dramatic as the ethanol-induced changes in PCN and 5MPCA, suggesting that ethanol mainly affected the biosynthetic steps after the formation of PCA leading to its conversion to PCN, 5MPCA, and PYO. In different settings, such as liquid cultures or in clinical isolates lacking activity of LasR, a transcriptional regulator for quorum sensing that controls phenazine production, the presence of C. albicans enhanced the production of 5MPCA and PYO [24,25]. Taken together, all these observations indicate that fungally-produced ethanol may enhance the conversion of PCA to end products such as PCN, 5MPCA, and PYO.
These studies indicate how microbial species can alter the behavior of one another and suggest that the nature of these dynamic interactions can change depending on the context. In the rhizosphere, where pseudomonad antagonism of fungi includes the colonization of fungal hyphae and phenazine production, the enhancement of fungally-produced ethanol by phenazines and stimulation of biofilm formation and phenazine production by ethanol may create a cycle that is relevant to biocontrol [23,66,67]. In chronic infections where these two species are found together, such as in chronic CF-associated lung disease, this molecular interplay may be synergistic and promote long-term colonization of both species in the host. These findings indicate that the treatment of colonizing fungi may be beneficial due to their effects on other pathogens even if the fungi themselves are not acting as overt agents of host damage.
Strains, media, and growth conditions
Bacterial and fungal strains and plasmids used in this study are listed in Table S1. Bacteria and fungi were maintained on LB [68] and YPD (2% peptone, 1% yeast extract, and 2% glucose) media, respectively. When stated, ethanol (200-proof), choline chloride, or glycerol was added to the medium (liquid or molten agar) to a final concentration of 1%. Control cultures received an equivalent volume of water. When ethanol was supplied as a sole carbon source, glucose and amino acids were omitted. Mutants from the PA14 Non-Redundant (NR) Library were grown on LB with 30 µg/mL gentamicin [29]. When strains from the NR library were used, the location of the transposon insertion was confirmed using sets of site-specific primers followed by sequencing of the amplicon. The primers are listed in Table S2.
Growth curve analysis of P. aeruginosa in the presence of ethanol

For growth curves, overnight cultures were diluted into 5 ml of fresh medium (LB or M63 with 0.2% glucose [69], with or without ethanol) to an OD600 of ~0.05 and incubated at 37°C on a roller drum. Culture densities below 1.5 were measured directly in the culture tubes using a Spectronic 20 spectrophotometer. At higher cell densities, diluted culture aliquots were measured using a Genesys 6 spectrophotometer.

[Figure 9. Our proposed model for the impacts of fungally-produced ethanol on P. aeruginosa behaviors. Our previous work has shown that P. aeruginosa phenazines increase fungal ethanol production. Here, we show that ethanol stimulates the Wsp system, leading to a WspR-dependent increase in c-di-GMP levels and a concomitant increase in Pel production and biofilm formation on plastic and on airway epithelial cells. In addition, ethanol altered phenazine production by promoting 5MPCA release and the accumulation of PCN. doi:10.1371/journal.ppat.1004480.g009]
Quantification of P. aeruginosa attachment to plastic and airway epithelial cells
To measure the attachment of cells to the plastic surface in 6-well or 12-well untreated polystyrene plates, wells were inoculated with a suspension of cells at an initial OD600 of 0.002 from overnight cultures. Every 90 minutes, the culture medium was removed and fresh medium was supplied. Pictures were taken using an inverted Zeiss Axiovert 200 microscope with a long-distance 63× DIC objective at specified intervals. To quantify the number of cells or microcolonies in control cultures compared to cultures with ethanol, images were captured, randomized, and analyzed by a researcher who was blind to the identity of the sample at the time of analysis. In each experiment, more than 10 fields were counted for each strain. Biofilm formation in plastic microtiter dishes was assayed and analyzed using the crystal violet assay as described in [55], and biofilm values were quantified as the absorbance of the solubilized dye at 650 nm. Microcolonies were defined as clusters of more than 5 cells in physical contact with one another.
The analysis of P. aeruginosa colonization of airway epithelial cells was performed using CFBE human bronchial epithelial cells (CFBE41o-) with the CFTRΔF508/ΔF508 genotype [70] as described previously [35,36]. For imaging, cells were grown in 6-well glass-bottom dishes (MatTek). For quantification of attached cells, CFBEs were grown in 6- or 12-well plates. P. aeruginosa strain PAO1 cells were added at an MOI of 30:1, and the medium was exchanged every 1.5 hours. For experiments with C. albicans, PAO1 cells and C. albicans were added together to CFBE monolayers, where C. albicans was at an MOI of 10:1 with respect to the epithelial cells. Pictures were taken using a Zeiss Axiovert 200 microscope with a 63× DIC objective at specified intervals. We performed multiple experiments with technical replicates (between three and six) on different days and analyzed the data with a one-way analysis of variance and Tukey's post hoc test using GraphPad Prism 6. We observed that cells from different passages had differences in the mean attachment across all samples from that day. Thus, we normalized values to the mean across all samples from each experiment. LDH release was measured after six hours using the Promega CytoTox96 Non-Radioactive Cytotoxicity kit as described in the manufacturer's instructions.
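For readers reproducing this normalization and testing scheme, the following Python sketch shows one way to normalize attachment counts to the per-experiment mean and then run a one-way ANOVA with Tukey's post hoc test; the column names and values are hypothetical placeholders, not the study's data.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical attachment counts from two experiment days.
df = pd.DataFrame({
    "experiment": ["d1"] * 6 + ["d2"] * 6,
    "treatment":  ["control", "ethanol"] * 6,
    "cfu":        [100, 210, 90, 200, 110, 230, 300, 640, 310, 600, 280, 610],
})

# Normalize each value to the mean across all samples from the same day,
# as done above to correct for passage-to-passage differences.
df["norm"] = df["cfu"] / df.groupby("experiment")["cfu"].transform("mean")

groups = [g["norm"].values for _, g in df.groupby("treatment")]
print(stats.f_oneway(*groups))                         # one-way ANOVA
print(pairwise_tukeyhsd(df["norm"], df["treatment"]))  # Tukey post hoc
```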
Analysis of P. aeruginosa swarming and twitching motility
Swarming motility was tested by inoculating 2.5 µL of overnight culture onto fresh M8 medium (M8 salts without trace elements, supplemented with 0.2% glucose, 0.5% casamino acids, and 1 mM MgSO4) containing 0.5% agar, as described previously [71]. Plates were incubated face up at 37°C with 70-80% humidity in stacks of no more than 4 for 16.5 h. To quantify the degree of swarming, percent coverage of the plate was measured using ImageJ software [72]. Twitching motility was analyzed as described previously [26].
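The ImageJ percent-coverage measurement can be approximated in Python as in the sketch below, which thresholds a plate photograph and reports the swarm-covered fraction; the file name and the assumption that the swarm is darker than the agar are illustrative, not taken from the protocol.

```python
from skimage import io, filters

# Load a grayscale photograph of a swarm plate (hypothetical file name).
img = io.imread("swarm_plate.tif", as_gray=True)

thresh = filters.threshold_otsu(img)   # automatic global threshold
swarm = img < thresh                   # assume the swarm is darker than agar
percent_coverage = 100.0 * swarm.sum() / swarm.size
print(f"{percent_coverage:.1f}% of the plate covered")
```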
Cyclic-di-GMP measurements
Cells were collected from swarm plates after incubation at 37°C for 16.5 h.
Microscopic analysis of WspR
Sample preparation and microscopy were performed as previously described [20,34]. To analyze liquid-grown cells, cultures were grown at 37°C with shaking to an optical density at 600 nm (OD600) of 0.3 in M9 medium (1× M9 salts pH 7.4, 2 mM MgSO4, 0.1 mM CaCl2, 0.2% glycerol, 0.2% casamino acids, and 10 µg/ml thiamine HCl). 1% arabinose was included for induction of wspR, and 1% ethanol was added when comparing its effect on WspR clustering. From each culture, 3 µl were spotted onto a 0.8% agarose PBS pad on a microscope slide and then covered with a coverslip.
More than 100 cells were counted for each condition.
P. aeruginosa-C. albicans co-cultures
Preformed lawns of C. albicans CAF2 and adh1/adh1 were prepared by spreading 700 µL of a YPD-grown overnight culture onto a YPD 1.5% agar plate followed by incubation at 30°C for 48 h. Exponential-phase P. aeruginosa liquid cultures were spotted (5-10 µL) onto the C. albicans lawns, then incubated at 30°C for an additional 24 to 72 hours.
Analysis of phenazines
Overnight cultures of P. aeruginosa PA14 wild-type and ΔphzH ΔphzM ΔphzS strains were grown in LB at 37°C (shaken at 250 rpm). Ten microliters of each culture were spotted onto a track-etched membrane (Whatman 110606; pore size 0.2 µm; diameter 2.5 cm) that was placed on 1.5% agar M8 medium supplemented with either vehicle (water) or 1% v/v ethanol. Plates contained 3 ml of medium in a 35 × 10 mm agar plate (Falcon). The colonies were incubated at 37°C for 24 hours and then at room temperature for 72 hours, after which phenazines were extracted from the colonies and agar separately. Each track-etched membrane with a colony was lifted off the agar plate and nutated in 5 mL of 100% methanol overnight at room temperature. Similarly, the agar was nutated overnight in 5 mL of 100% methanol. Colony and agar extracts were filtered (0.2 µm pore) and phenazines in the extraction volume (5 mL) were quantified by high-performance liquid chromatography as previously described [41] at a flow rate of 0.4 mL/min.
Statistical analyses
All data were analyzed using GraphPad Prism 6. The data represent the mean ± standard deviation of at least three independent experiments with multiple replicates unless stated otherwise. For normally distributed data, comparisons were tested with Student's t-test.

[Figure S1. Ethanol does not affect P. aeruginosa PA14 WT growth. Growth kinetics in M63 medium with 0.2% (w/v) glucose and 0.5% (w/v) casamino acids with and without 1% ethanol (EtOH). Error bars represent one standard deviation; N = 3. Colony growth on the same medium with 1.5% agar with or without 1% ethanol is also shown (inset). (TIF)]

[Figure S2. Ethanol does not inhibit twitching behavior in P. aeruginosa. Twitching motility in P. aeruginosa strain PA14 wild type and ΔpilA (a strain defective in twitching motility) in the absence and presence of 1% ethanol (EtOH). Average twitch diameters are 16 (TIF)]

[Figure S6. Candida albicans does not lead to ethanol-dependent increases in colonization of airway epithelial cells in the ΔwspR and ΔwspA backgrounds. P. aeruginosa PAO1 ΔwspR and ΔwspA were cultured with a monolayer of ΔF508 CFTR-CFBE cells and either the C. albicans CAF2 (WT reference strain) or the C. albicans adh1/adh1 mutant (adh1). Data represent the average of three technical replicates per experiment and the experiment was performed twice. Error bars represent the standard deviation among replicates. (TIF)]

[Figure S7. Ethanol stimulates PCN production but not PCA production in P. aeruginosa strain PA14. A. Phenazine biosynthetic pathway and enzymes necessary for phenazine modifications. B-E. Concentrations of PCN and PCA in 5 ml extracts from the colony (B) or the underlying agar (C). In B and C, the wild type (WT) was grown without and with 1% ethanol. In D and E, PA14 ΔphzHMS, which lacks the ability to transform PCA into phenazine derivatives, was used. The error bars represent standard deviations for the phenazines extracted from 6 samples; *, P > 0.05; **, P ≤ 0.05; ***, P ≤ 0.01; ****, P ≤ 0.001; ns, P > 0.05. F. Swarm phenotype of the Δphz mutant without and with 1% ethanol. (TIF)]

[Figure S8. Neither wspR nor ethanol catabolism is solely responsible for increased PCN or 5MPCA. A. Spot colonies of P. aeruginosa strain PA14 ethanol catabolism mutants and the ΔwspR strain were grown in the absence and presence of 1% ethanol for 8 days, then imaged. B. C. albicans CAF2 (wild type) lawns were spot inoculated with P. aeruginosa strain PA14 wild type (WT), ΔwspR, or exaA::TnM, and incubated at 30°C for 24 h, then at room temperature for 36 h. (TIF)]

[Figure S9. Ethanol has modest, if any, effects on biofilm formation in a microtiter dish assay. P. aeruginosa strains PAO1 and PA14 were grown in M63 medium with glucose and casamino acids either without or with 1% ethanol (EtOH). While strain PAO1 showed modest stimulation at 24 h, strain PA14 did not show stimulation of biofilm at this time point. Biofilms were measured by crystal violet staining followed by solubilization and measured as absorbance at 650 nm. Differences between control and ethanol were small but significant and reproducible (p < 0.05) for strain PAO1 and not significant for strain PA14. (TIF)]
"Biology",
"Medicine"
] |
An intelligent water drop algorithm with deep learning driven vehicle detection and classification
Vehicle detection in Remote Sensing Images (RSI) is a specific application of object recognition in satellite or aerial imagery. This application is highly beneficial in different fields like defense, traffic monitoring, and urban planning. However, complex particulars about the vehicles and the surrounding background, delivered by the RSIs, need sophisticated investigation techniques depending on large data models. This is crucial, though the amount of reliable and labelled training datasets is still a constraint. The challenges involved in vehicle detection from the RSIs include variations in vehicle orientations, appearances, and sizes due to dissimilar imaging conditions, weather, and terrain. Both specific architecture and hyperparameters of the Deep Learning
Introduction
In the present scenario, the application of Unmanned Aerial Vehicles (UAV) has become a popular area in the Remote Sensing (RS) domain, driven by its academic and commercial achievements [1]. However, these practices are highly dissimilar for the same application, primarily because data acquisition using sensors is more flexible than the existing traditional methods [2]. In optical Remote Sensing Images (RSIs), Object Detection (OD) refers to the process in which a given satellite or aerial image holds a single object or multiple objects of interest, and the location of every predicted object in the image is identified [3]. The term 'object', employed in this study, denotes a general structure of artificial objects such as vehicles, buildings, ships, etc. with sharp borders. In this terminology, background locations and landscape substances such as land use/land cover (LULC) parcels are not considered, since they have unclear borders and extents [4]. In order to overcome the essential difficulty in aerial and satellite image analysis, the OD process plays a vital part in optical RSIs for a wide range of uses such as Geographic Information System (GIS) upgrades, LULC mapping, environmental hazard recognition, environmental observation, precision agriculture, and urban development [5]. In optical RSIs, the OD process frequently encompasses numerous challenging tasks, including huge differences in the visual appearance of objects affected by perspective variation, background clutter, brightness, shadow, and much more [6].
The OD process further plays an essential role in military as well as civilian applications using the RSIs [7]. However, this process faces multiple hindrances in the form of the changing visual appearance of objects affected by illumination, obstruction, shadow, resolution, viewpoint variation, polarization, speckle noise, and much more [8]. Moreover, the explosive development of the RSIs in both quality and quantity incurs heavy computational cost, which in turn complicates the real-time application of OD. Robust and quick vehicle recognition from the RSIs has prospective applications in emergency management, traffic surveillance, and financial analysis. Besides, the location and density data of the vehicles act as vital information for creating Intelligent Transport Systems [9]. However, precise and robust vehicle classification from the RSIs remains one of the most challenging tasks to accomplish. Traditional vehicle recognition techniques are mainly based on hand-crafted features extracted from sliding windows at dissimilar scales. However, these models heavily depend on manually-designed features and cannot effectively handle huge differences in both backgrounds and targets [10]. Currently, Convolutional Neural Networks (CNNs) have been employed in aerial image OD and have attained promising outcomes. For example, You Only Look Twice (YOLT) and R-CNNs (Region-CNN) have been applied as benchmarks in various studies.
Numerous tasks are involved in the vehicle recognition and classification process from the RSIs utilizing DL techniques. One important difficulty is the restricted robustness of the existing techniques to meet the needs of different environmental conditions such as changing lighting, climate, and terrain. This drawback impacts the models' flexibility across dissimilar remote sensing states. In addition, it is also challenging to achieve both scalability and generalization of the existing methods to handle large-scale and high-resolution datasets. The need to modify the hyperparameters to accommodate the remote sensing data parameters frequently demands human intervention. Furthermore, the explainability and the interpretability of the classification choices in DL techniques for remote sensing remain important yet under-addressed challenges. In order to overcome these hindrances, a pressing need exists to develop a method that not only combines innovative DL architectures but also incorporates real hyperparameter tuning plans, thus offering a holistic solution for robust and precise vehicle recognition and classification under various remote sensing atmospheres.
In this background, the current study developed the Intelligent Water Drop Algorithm with Deep Learning-Driven Vehicle Detection and Classification (IWDADL-VDC) technique to be used upon the RSIs. The goal of the projected technique is to modernize the vehicle recognition and classification processes conducted upon the Remote Sensing Images (RSI) using DL techniques. By incorporating innovative DL architectures and executing real hyperparameter tuning plans, the technique seeks to improve adaptability, robustness, and scalability across different environmental states. The IWDADL-VDC model exploits a hyperparameter-tuned DL approach for both recognition and classification of the vehicles. For the vehicle detection process, the IWDADL-VDC technique uses the improved YOLO-v7 model. After identifying the vehicles, the next classification stage is performed using the Deep Long Short-Term Memory (DLSTM) model. In order to improve the classification outcomes of the DLSTM model, an IWDA-based hyperparameter tuning process has been employed in this study. The proposed IWDADL-VDC model was experimentally validated using a benchmark dataset.
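To make the role of the IWDA concrete, the sketch below gives a heavily simplified, hypothetical rendering of Intelligent Water Drops-style hyperparameter search: each candidate value carries "soil" that traversing drops erode, biasing later drops toward frequently chosen values. The search space, the scoring stub, and all constants are illustrative assumptions; the paper's actual IWDA, with its velocity and heuristic-distance terms, is more elaborate.

```python
import random

# Hypothetical DLSTM hyperparameter search space.
SPACE = {
    "learning_rate": [1e-4, 5e-4, 1e-3, 5e-3],
    "hidden_units":  [64, 128, 256],
    "dropout":       [0.0, 0.2, 0.4],
}

def validation_score(cfg):
    # Placeholder for training the DLSTM and returning validation accuracy.
    return random.random()

# One "soil" value per (hyperparameter, candidate value) edge.
soil = {(k, v): 1000.0 for k, vals in SPACE.items() for v in vals}
A_S, B_S, C_S = 1.0, 0.01, 1.0   # soil-update constants (illustrative)
RHO = 0.9                        # soil erosion/update rate
best_cfg, best_score = None, -1.0

for iteration in range(30):
    cfg = {}
    for key, values in SPACE.items():
        # Lower soil on an edge -> higher probability of choosing it.
        weights = [1.0 / (0.01 + soil[(key, v)]) for v in values]
        cfg[key] = random.choices(values, weights=weights)[0]
    score = validation_score(cfg)
    for key, v in cfg.items():
        # The drop erodes soil along its path (time term simplified to 1),
        # clamped so weights stay positive.
        soil[(key, v)] = max(1e-6,
                             (1 - RHO) * soil[(key, v)] - RHO * A_S / (B_S + C_S))
    if score > best_score:
        best_cfg, best_score = cfg, score

print(best_cfg, best_score)
```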
Literature works
In the literature [11], the improved Chimp optimization algorithm with DL-based vehicle detection and classification (ICOA-DLVDC) algorithm for RSIs was proposed. This algorithm had two different stages, namely object detection and classification. The EfficientNet architecture was used in this study for vehicle detection. For classification, the sparse autoencoder (SAE) network was used. The study introduced ICOA, which simplifies the parameter tuning process and in turn effectively enhanced the hyperparameters of the SAE. Javadi et al. [12] introduced a DNN using YOLOv3 with different baseline networks such as DenseNet-201, DarkNet53, MobileNetv2, and SqueezeNet. In this study, 3D depth maps were produced from parallax displacement and a pair of aerial images. Then, the FCNN model was trained on 3D feature mappings of trucks, trailers, and semi-trailers. At last, the network was applied for detecting the vehicles in aerial images.
Gao et al. [13] presented a vehicle recognition module named binary fully convolutional one-stage object detection (FCOS) and a dataset termed 4MVD for vehicle recognition in the RSIs. During the RPN step, the FCOS module was utilized to generate the candidate boxes in different images. Dual-step positive and negative example models were implemented to improve the effect of positive-sample selection. During the RCNN step, two-stage classification models were introduced. Ragab et al. [14] devised the Improved DL-assisted Vehicle Detection for Urban Applications employing the RSI (IDLVD-UARSI) method. This algorithm exploited the enhanced RefineDet framework for vehicle detection. The classification model was incorporated by employing the CAE module. At last, the Quantum-enabled Dwarf Mongoose Optimizer (QDMO) method was exploited for the optimum hyperparameter tuning.
The foremost aim of the study conducted earlier [15] was two-fold: the first was the construction of a novel space object image collection, and the second was to propose a unique RSIC model to achieve high-performance classification utilizing the newly generated data. In general, feature extraction can be performed using the pre-trained MobileNet_V2 and discrete wavelet transform (DWT) methods. In this study, an integration of the DWT and MobileNet_V2 models was used to generate several features. Next, Iterative Neighborhood Component Analysis (INCA) selected the optimum features. Finally, the selected features were fed into an SVM for automatic classification. Ahmed et al. [16] developed an IoT-aided smart surveillance model for object recognition through segmentation. In particular, this research combines the concepts of IoT, cooperative drones, and DL to improve surveillance initiatives in smart cities. For segmentation, an AI-based system was introduced in this study by employing the DL-based Pyramid Scene Parsing Network (PSPNet).
Alotaibi and Nagappan [17] presented an automatic vehicle detection and classification method based on the chaotic equilibrium optimizer algorithm with DL (VDTC-CEOADL). This technique exploited the YOLO-HR object detector, in which ResNet was used as a backbone. Further, a CEOA-based hyperparameter optimizer was introduced for hyperparameter tuning of the ResNet architecture. This technique used the attention-based LSTM (ALSTM) model for the purpose of classification. Gadamsetty et al. [18] presented a technique in which a supervised image detection method was exploited for the categorization of the images, followed by object recognition through the YOLOv3 model so as to extract the DCNN features. In general, semantic segmentation and image segmentation can be performed to detect the object category of all the pixels using class labels. Next, the idea of hashing with SHA-256 was used together with the shipping amount as well as the position of the bounding box for satellite images.
Xie et al. [19] presented a Dense Sequential Fusion (DSF) structure, specifically intended to fuse camera and LiDAR sensor data. The main intention of the study was to improve the robustness and accuracy of 3-D object recognition, mainly for distant objects. Alshahrani et al. [20] presented the Artificial Ecosystem Optimizer with DCNN for Vehicle Detection (AEODCNN-VD) technique for the RSIs. The projected AEODCNN-VD method concentrated on the classification of vehicles in a rapid and precise manner. Ahmed et al. [21] projected the Chicken Swarm Optimizer with Transfer Learning-Driven Vehicle Detection and Classification on the RSI (CSOTL-VDCRS) method to resolve these problems. This method made use of the mask-region-based CNN (Mask RCNN) system for vehicle recognition. After detecting the vehicles in the RSIs, they were then categorized using the Fuzzy Wavelet Neural Network (FWNN) method. Aljebreen et al. [22] developed a new honey badger optimizer algorithm with an ensemble learning-based vehicle detection and classification (HBOAEL-VDC) system. The main purpose of the research was to project ensemble DL techniques for a precise vehicle classification process.
The research gap in vehicle recognition and identification on the RSIs employing DL lies in the restricted exploration of hyperparameter tuning plans within the current methods. Though DL techniques have established their importance in terms of precise object detection in the RSI, the optimum configuration of hyperparameters, which is vital for model performance, remains under-addressed. The diverse and difficult nature of the RSIs needs a nuanced method of hyperparameter tuning to safeguard flexibility and robust generalization. So, it is important to investigate and develop effective hyperparameter tuning methods that achieve precise vehicle recognition from the RSIs. This is crucial in terms of improving the classification accuracy, addressing tasks like changing environmental states and image resolutions, and finally increasing the reliability and applicability of DL techniques in the area of RSI for vehicle recognition and identification.
Proposed model
In the current research work, the authors present an innovative IWDADL-VDC methodology to be applied on the RSIs. The IWDADL-VDC model exploits the hyperparameter-tuned DL model for detection and classification of the vehicles. To accomplish this, the IWDADL-VDC model follows two major stages, namely vehicle detection and classification. Figure 1 illustrates the overall process of the IWDADL-VDC technique.
Vehicle detection using YOLOv7 object detection network
For the vehicle detection process, the IWDADL-VDC technique uses the improved YOLO-v7 model. Being the baseline method in the YOLO series, the YOLO_v7 model adopts designs such as the extended efficient layer aggregation network (E-ELAN), a scaling method that depends on convolution reparameterization, and concatenation-related methods. This model accomplishes a better balance between detection accuracy and efficiency [23]. The detection idea followed by the YOLO_v7 model remains the same as in the YOLO_v4 and YOLO_v5 models of the YOLO series. There exist four modules in the YOLO_v7 network, namely head, backbone, input, and prediction. The head module comprises the Path Aggregation Feature Pyramid Network (PAFPN) model. The input module is used to scale the input images into even pixel dimensions so as to meet the network requirements. The backbone module comprises the MPConv convolution layer, BConv convolution layer, and E-ELAN convolution layer, while BConv has LeakyReLU activation function, convolution, and Batch Normalization (BN) layers, which are used in the extraction of image data at varying scales. With the emergence of the bottom-up path, it is easy to transmit the fundamental data to a higher level, thereby realizing an effective incorporation of the features at dissimilar levels. A prediction element is used to adjust the image networks for the P3, P4, and P5 features of dissimilar scales output through PAFPN via the RepVGG block (REP), and the outcome is passed over 1 × 1 convolutions, which predict the anchor frame, confidence, and category. In this scenario, the field vehicle detection module must meet the requirements in terms of accuracy and real-time detection. The YOLOv7 model has been chosen as the basic model in this study since it strikes a better balance between speed and accuracy of detection.
Initially, several SimAM attention-mechanism modules are embedded into the YOLOv7 network architecture. In general, an attention module assigns different weights to parts of the network input. The technique disregards unrelated data and focuses only on the relevant data, which can efficiently enhance the feature extraction capability against a complicated background. SimAM is an attention model that does not increase the parameter count of the network; it can be embedded at any position of the model and has plug-and-play features. The fundamental objective of SimAM lies in computing attention weights through its energy function. SimAM reduces the interference of a complicated background on vehicle recognition by producing spatial inhibition on the neurons neighbouring the vehicle. This highlights the basic features of the vehicle and also improves the capability of extracting those features, as given below.
The refined vehicle feature map is obtained as $\tilde{X} = \mathrm{sigmoid}\!\left(\frac{1}{E}\right) \otimes X$, where the sigmoid function limits the values of $\frac{1}{E}$ so as to prevent them from becoming too large. Here, $\tilde{X}$ denotes the improved mapping feature of the vehicle and $E$ groups the minimal energies $e_t^*$ across the channel; the lower the energy, the greater the distinction between the target vehicle neuron and its neighbours. Following the SimAM formulation, the minimal energy of the target neuron $t$ is $e_t^* = \frac{4(\hat{\sigma}^2 + \lambda)}{(t - \hat{\mu})^2 + 2\hat{\sigma}^2 + 2\lambda}$, where $X$ indicates the input vehicle feature map, $\otimes$ denotes the element-wise (dot product) operation, $\hat{\sigma}^2$ refers to the variance over all neurons of the channel in the input vehicle feature map, $t$ denotes the target vehicle neuron, $\hat{\mu}$ represents the mean value of the channel in the input vehicle feature map, and $\lambda$ implies a super-parameter.
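The parameter-free weighting just described can be written compactly. The following is a minimal sketch of a SimAM-style module following the published SimAM formulation; the default value of the super-parameter $\lambda$ is an illustrative assumption.

```python
import torch

def simam(x: torch.Tensor, lam: float = 1e-4) -> torch.Tensor:
    """Parameter-free SimAM attention: weight each neuron by sigmoid(1/E),
    where E is the minimal energy derived from the channel mean and variance.
    A sketch following the published SimAM formulation; lam is lambda."""
    _, _, h, w = x.shape
    n = h * w - 1                                    # neurons per channel minus the target
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
    v = d.sum(dim=(2, 3), keepdim=True) / n          # channel variance estimate
    e_inv = d / (4 * (v + lam)) + 0.5                # inverse energy 1/E per neuron
    return x * torch.sigmoid(e_inv)                  # suppress background, keep vehicle cues

# Plug-and-play: apply to any feature map without adding parameters
feats = torch.randn(1, 64, 80, 80)
out = simam(feats)
```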
Next, downsampling is the major role of MPConv, in which the feature sizes are reduced at the cost of some feature loss. It is to be noted that the dual branches of the MPConv model in YOLOv7 exploit 3 × 3 kernels for the convolution process. During this procedure, some features may get lost and ineffective feature learning may take place once the stride is 2. The 3 × 3 convolution in the lower branch of the MPConv layer is therefore substituted by a focus module, based on the focus module in YOLOv5. With the help of a halved feature map, the learning efficacy of the features and the accuracy of vehicle detection under complicated backgrounds are enhanced, while the loss of features is reduced.
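Below is a minimal sketch of the YOLOv5-style focus (space-to-depth) slicing that the text substitutes for the stride-2 convolution branch; the channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    """YOLOv5-style focus module: slice the feature map into four
    pixel-offset sub-maps, concatenate along channels (space-to-depth),
    then convolve. This halves H and W without discarding pixels, which
    is the lossless downsampling replacing the 3x3/stride-2 branch."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(4 * c_in, c_out, kernel_size=1)

    def forward(self, x):
        patches = [x[..., ::2, ::2], x[..., 1::2, ::2],
                   x[..., ::2, 1::2], x[..., 1::2, 1::2]]
        return self.conv(torch.cat(patches, dim=1))

x = torch.randn(1, 64, 80, 80)
print(Focus(64, 128)(x).shape)   # torch.Size([1, 128, 40, 40])
```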
Classification using a DLSTM model
In this stage, the classification is performed with the help of the DLSTM model. LSTM is a specific Recurrent Neural Network (RNN) that solves the issues of long-term dependence in time sequences and of gradient vanishing and explosion that affect RNNs during extended-series training [24]. The storage cell of the LSTM strengthens the links between long- and short-term time sequences. The method is capable of updating, preserving, and removing the data in the storage unit via three gates, namely the forget, input, and output gates. The gate design holds the data for a long time while also managing the information flow. Compared with standard RNNs, the LSTM technique accomplishes superior performance on long sequences. In this study, the LSTM technique was implemented using the MATLAB 2022 Deep Learning Toolbox. The detailed steps of the LSTM technique are as follows: Step 1. The forget gate reads the data of $h_{t-1}$ and $x_t$, and selects whether to preserve the data of the earlier time step via the sigmoid function $\sigma(\cdot)$: $f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$. Here, $f_t$ refers to the forget gate output, $W_f$ signifies the weight, $x_t$ denotes the input at time $t$, $h_{t-1}$ is the previously computed output, and $b_f$ denotes the bias of the forget gate.
Step 2. The input gate decides which data are kept in the cell state. First, the update value $i_t$ is defined by the sigmoid function, whereas a novel candidate value $\tilde{C}_t$ is produced by the tanh function: $i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$ and $\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$.
Here, $W_i$ and $W_C$ denote the weights while $b_i$ and $b_C$ refer to the biases. Step 3. The old cell state is updated via the forget and input gates so as to produce an upgraded value: $C_t = f_t \times C_{t-1} + i_t \times \tilde{C}_t$. Step 4. The output gate result depends on the cell state. First, the output value $o_t$ of the cell state is defined using the sigmoid function, $o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$; the updated cell state is then normalized by the tanh function and multiplied with $o_t$ to attain the result value $h_t$ at time $t$: $h_t = o_t \times \tanh(C_t)$.
Here, $W_o$ and $b_o$ correspond to the output gate weight and bias, respectively. Figure 2 demonstrates the architecture of the DLSTM model.
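To make Steps 1-4 concrete, the following is a minimal NumPy sketch of a single LSTM time step; the dictionary-based parameter layout and toy dimensions are illustrative assumptions, not the MATLAB toolbox implementation used in the study.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step following Steps 1-4 above. W and b hold the
    forget/input/candidate/output parameters; shapes are illustrative."""
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # Step 1: forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])       # Step 2: input gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])   #         candidate value
    c_t = f_t * c_prev + i_t * c_tilde       # Step 3: cell-state update
    o_t = sigmoid(W["o"] @ z + b["o"])       # Step 4: output gate
    h_t = o_t * np.tanh(c_t)                 #         hidden output
    return h_t, c_t

# Toy dimensions: 4 hidden units, 3 input features
rng = np.random.default_rng(0)
H, D = 4, 3
W = {k: rng.standard_normal((H, H + D)) * 0.1 for k in "fico"}
b = {k: np.zeros(H) for k in "fico"}
h, c = lstm_step(rng.standard_normal(D), np.zeros(H), np.zeros(H), W, b)
```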
IWDA-based hyperparameter-tuning process
In order to enhance the detection performance of the DLSTM technique, an IWDA-based hyperparameter tuning process has been employed in this study. The choice of the IWDA for hyperparameter tuning rests on its distinctive optimization features and its suitability for the specific demands of the hyperparameter optimization task. The IWDA is inspired by the flow behaviour of water drops in nature and is well suited to resolving difficult, non-linear optimization problems. It displays a fine balance between the exploration and exploitation phases, which is vital for identifying optimal hyperparameter configurations: it explores the hyperparameter space while also exploiting promising areas, which helps prevent the search from getting stuck in local optima.
IWDA is a swarm intelligence algorithm based on the principle that water drops interact with sediment to create a flow track as they move [25]. The erosion of the riverbed by water movement results in ravines on the riverbed. A water flow is a collection of unit droplets, and every individual drop carries sediment and velocity attributes. Under gravity, once a water droplet chooses the direction with comparatively small resistance, viz., the direction with low sediment, the droplet takes away more deposition and attains a greater speed. When two water drops (a) and (b) with similar properties pass over different paths, the droplet on the path with less sediment attains the higher velocity increment and carries away more sediment.
Water drops move according to a discretized abstract model and carry two significant properties: the sediment-carrying property $soil(\cdot)$ and the motion attribute, velocity $vel(\cdot)$. Both properties change as the water drop flows. Assume the water drop moves from its present state to the following location, so that it undergoes the changes described next.
Initially, the water droplet tends to select the path with a lesser amount of sediment. The probability $p(i,j)$ that the water droplet at the $i$th location chooses the $j$th particle as the following location is inversely related to the volume of sediment on path $(i,j)$; following the standard IWD formulation, the selection probability is defined as $p(i,j) = \frac{f(soil(i,j))}{\sum_{k} f(soil(i,k))}$ with $f(soil(i,j)) = \frac{1}{\varepsilon + g(soil(i,j))}$, where $\varepsilon$ is a small positive constant and $g(\cdot)$ shifts the sediment values to be non-negative.
Once the water droplet travels from the $i$th to the $j$th position, its velocity changes; this speed increment $\Delta vel(t)$ is inversely proportional to the sediment content $soil(i,j)$ on the running path. In the standard IWD formulation, $vel(t+1) = vel(t) + \frac{a_v}{b_v + c_v \cdot soil^2(i,j)}$, where $a_v$, $b_v$, and $c_v$ are predetermined velocity parameters.
The quantity of sediment transported by the water drop increases by the quantity of sediment reduction $\Delta soil(i,j)$ on the path $(i,j)$. After the water droplet passes, the sediment reduction on the path is inversely proportional to the time required for the droplet to traverse path $(i,j)$:

$\Delta soil(i,j) = \frac{a_s}{b_s + c_s \cdot time^2(i,j)}$

Here, $a_s$, $b_s$, and $c_s$ correspond to predetermined parameters, and the time taken for a water drop to move from the $i$th position to the $j$th position is given by $time(i,j) = \frac{HUD(i,j)}{vel(t+1)}$. In Eq. (17), the heuristic concerning the road segment $(i,j)$ is denoted by $HUD(i,j)$.
Once the water droplet arrives at the $j$th position from the $i$th location, the sediment $soil(i,j)$ is updated to provide feedback for the course planning of other water droplets.
$soil(i,j) = (1 - p) \cdot soil(i,j) - p \cdot \Delta soil(i,j).$
Here, the coefficient $p$ lies in the range $[0, 1]$. Every drop finishes its path planning from the beginning to the endpoint through steps (1) to (4); the course of each droplet is then scored with an evaluation function $q(\cdot)$ to choose the optimum path within the droplet group. To reinforce the optimum path, the planning process has a controlling effect on subsequent course planning. This further enhances the capability of the water droplets to find the optimum path, and it is essential to form a feedback model for updating the global sediment volume on the optimum path.
$soil(i,j) = (1 - \rho)\, soil(i,j) + \rho \cdot \frac{2\, soil^{IWD}}{N_{lB}(N_{lB} - 1)}.$
In Eq. (20), $\rho$ is the update parameter within $[0, 1]$ and $N_{lB}$ is the node count of the best path. Fitness selection is an important factor that influences the performance of the IWDA methodology. The hyperparameter selection method contains a solution encoding model to estimate the efficiency of candidate solutions. In this study, the IWDA model treats 'correctness' as the chief measure for the fitness function; consistent with the variables defined below, it can be expressed as $fitness = \frac{TP}{TP + FP}$.
In the expression given above, TP denotes the true positive count and FP represents the false positive count.
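To illustrate how the water-drop mechanics above can drive a hyperparameter search, the following is a minimal sketch; the hyperparameter grid, the constants, the use of $(1 - fitness)$ as a travel-time proxy, and the stand-in fitness stub are all illustrative assumptions rather than the paper's exact IWDA configuration.

```python
import random

random.seed(0)

# Discretized hyperparameter grid: each (learning rate, hidden units) pair
# is treated as one "path" carrying sediment.
grid = [(lr, units) for lr in (1e-4, 1e-3, 1e-2) for units in (64, 128, 256)]
soil = {cfg: 1000.0 for cfg in grid}             # initial sediment per path
a_s, b_s, c_s, eps, rho = 1.0, 0.01, 1.0, 0.001, 0.9

def fitness(cfg):
    """Stand-in for the TP / (TP + FP) score measured after training with cfg."""
    return random.random()                       # replace with a real train/validate run

for iteration in range(20):
    # Path selection: probability inversely related to sediment on each path
    weights = [1.0 / (eps + soil[cfg]) for cfg in grid]
    cfg = random.choices(grid, weights=weights)[0]
    q = fitness(cfg)
    # Sediment removal: better configurations (shorter "travel time") erode
    # more soil, so they are revisited more often, forming the feedback loop
    d_soil = a_s / (b_s + c_s * (1.0 - q) ** 2)
    soil[cfg] = max(eps, (1 - rho) * soil[cfg] - rho * d_soil)  # keep weights positive

best = min(soil, key=soil.get)                   # least sediment = most reinforced path
print("selected hyperparameters:", best)
```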
Experimental validation
This section details the performance validation of the IWDADL-VDC technique on two datasets: the VEDAI dataset [26] and the ISPRS Potsdam dataset [27]. The VEDAI dataset includes 3,687 samples under nine classes, while the ISPRS Potsdam dataset includes 2,244 samples under four classes. Figure 3 presents some sample detection images.

The vehicle classification results accomplished by the IWDADL-VDC system on the VEDAI dataset are shown in Table 1 and Figure 5. The outcomes show that the IWDADL-VDC system improved its performance in every class. With 70% of TRPH, the IWDADL-VDC technique achieved an average accuracy of 99.68%, precision of 96.93%, recall of 95.95%, F-score of 96.27%, and an MCC of 96.17%. With 30% of TSPH, the IWDADL-VDC technique provided an average accuracy of 99.50%, precision of 94.29%, recall of 92.50%, F-score of 92.95%, and an MCC of 92.89%, respectively. The training and validation behaviour over increasing numbers of epochs, together with the corresponding loss curves, indicates that the model fits the training data while generalizing well to previously unseen data (see Figures 6 and 7 below).

Figure 8 shows the outcomes of a brief comparison study conducted on the IWDADL-VDC technique using the VEDAI dataset [11]. The results highlight that the IWDADL-VDC technique achieved superior performance with a maximum accuracy of 99.68%, whereas the existing ICOA-DLVDC, CSOTL-VDCRS, LeNet, AlexNet, and VGG-16 models achieved lower values of 99.50%, 98.07%, 79.74%, 88.98%, and 94.46%, respectively.

The vehicle classification results of the IWDADL-VDC system on the ISPRS Potsdam dataset are shown in Table 2 and Figure 10. The values confirm that the IWDADL-VDC system achieved supreme performance in all classes. With 70% of TRPH, the system achieved an average accuracy of 99.87%, precision of 97.27%, recall of 99.75%, F-score of 98.47%, and an MCC of 98.18%. With 30% of TSPH, the IWDADL-VDC technique provided an average accuracy of 99.70%, precision of 97.95%, recall of 96.50%, F-score of 97.11%, and an MCC of 96.41%, correspondingly.

Figure 13 shows the extensive comparison analysis outcomes achieved by the IWDADL-VDC system on the ISPRS Potsdam dataset. The results show that the IWDADL-VDC technique achieved increased performance with a maximum accuracy of 99.87%, whereas the ICOA-DLVDC, CSOTL-VDCRS, LeNet, AlexNet, and VGG-16 methodologies achieved lower values of 99.70%, 98.67%, 94.54%, 95.86%, and 89.54%, correspondingly. It can thus be inferred that the IWDADL-VDC technique can be utilized for an accurate and automated vehicle detection process.
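As a brief aside on how the reported averages can be reproduced from raw predictions, the following is a minimal scikit-learn sketch with toy labels; macro averaging is an assumption about how the per-class scores were aggregated.

```python
# How the reported per-class averages can be computed from predictions.
# A sketch with toy values; scikit-learn is assumed available.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, matthews_corrcoef)

y_true = [0, 1, 2, 2, 1, 0, 2, 1, 0, 2]   # ground-truth vehicle classes
y_pred = [0, 1, 2, 2, 1, 0, 1, 1, 0, 2]   # model outputs (toy values)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("F-score  :", f1_score(y_true, y_pred, average="macro"))
print("MCC      :", matthews_corrcoef(y_true, y_pred))
```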
Conclusions
In the current research work, the authors have developed a new vehicle detection and classification model, named the IWDADL-VDC technique, to be applied to RSIs. The IWDADL-VDC methodology exploits improved YOLOv7 object detection with a DLSTM classifier and an IWDA-based hyperparameter tuning process. The IWDADL-VDC model was experimentally validated using two benchmark datasets, and the results attained were promising compared to other recent approaches. The proposed IWDADL-VDC technique achieved maximum accuracy values of 99.68% and 99.87% under the VEDAI and ISPRS Potsdam datasets, respectively. Future work in vehicle recognition and identification can concentrate on the incorporation of innovative DL designs, like transformer-based methods, to improve feature extraction and classification accuracy. In addition, further exploration of multi-modal sensor fusion, uniting data from cameras, LiDAR, and radar, may additionally enhance the robustness of vehicle recognition methods, especially in challenging environmental conditions. Further research could also examine real-time execution for dynamic traffic conditions and explore the possibility of edge computing to reduce latency in decision-making.
Figure 1. Workflow of the IWDADL-VDC technique.
Figure 4
Figure 4 shows the classifier analysis outcomes of the IWDADL-VDC system on the VEDAI dataset. Figures 4a and 4b reveal the confusion matrices generated by the IWDADL-VDC model with a 70:30 split between the training phase (TRPH) and the testing phase (TSPH). The figure infers that the IWDADL-VDC methodology can exactly categorize and recognize all nine classes. Furthermore, Figure 4c displays the PR analysis outcomes of the IWDADL-VDC method, exhibiting excellent PR performance in each class. Also, Figure 4d demonstrates the ROC analysis outcomes achieved by the IWDADL-VDC method, indicating successful outcomes with higher ROC values across diverse classes.
Figure 5 .
Figure 5. Vehicle classification analysis outcomes of the IWDADL-VDC model under the VEDAI dataset.
Figure 6
Figure 6 shows the training and validation analysis curves achieved by the IWDADL-VDC system on the VEDAI dataset. The figure offers useful insights into the effectiveness of the IWDADL-VDC algorithm over an increasing number of epochs. The two curves provide essential insights into the capabilities and learning evolution of the model with respect to generalization.
Figure 6 .
Figure 6. Training and validation curve of the IWDADL-VDC model under the VEDAI dataset.
Figure 7
Figure 7 shows an extensive view of the IWDADL-VDC approach on the VEDAI dataset in terms of training and testing loss values over multiple epochs. The training loss progressively diminishes as the model adjusts its weights to lessen the classification errors on both the testing and training databases. These loss curves represent the model's alignment with the training data and also highlight its capability to capture patterns in these databases. It is valuable to note that the IWDADL-VDC method repeatedly updates its parameters to reduce the discrepancies between the actual and the predicted training labels.
Figure 8 .
Figure 8. Comparative analysis outcomes of the IWDADL-VDC model under the VEDAI dataset.
Figure 9
Figure 9 displays the classifier analysis outcomes of the IWDADL-VDC system when using the ISPRS Potsdam dataset. Figures 9a and 9b show the confusion matrices generated by the IWDADL-VDC technique with a 70:30 TRPH/TSPH split. The figure infers that the IWDADL-VDC methodology can accurately categorize and recognize all four classes. Moreover, Figure 9c represents the PR analysis outcomes of the IWDADL-VDC algorithm, revealing remarkable PR performance. Further, Figure 9d shows the ROC analysis results of the IWDADL-VDC model, indicating efficacious outcomes with higher ROC values across the different classes.
Figure 10 .
Figure 10. Average analysis outcomes of the IWDADL-VDC algorithm under the ISPRS Potsdam dataset.
Figure 11
Figure 11 displays the training and validation curves of the IWDADL-VDC technique when using the ISPRS Potsdam dataset. The figure provides commendable insights into the effectiveness of the IWDADL-VDC system over multiple epochs. The two curves correspond to crucial insights into the capabilities as well as the learning progression of the model with respect to generalization. Moreover, it is apparent that the model is a consistent performer, with enhanced training and testing values over an increasing number of epochs. This further represents the capacity of the model to learn and identify the patterns within the training and testing databases. The enhancement analysis suggests that the model not only fits the training data but also succeeds in making predictions on previously unseen data, thus emphasizing its powerful generalization abilities.
Figure 11 .
Figure 11. Training and validation curve of the IWDADL-VDC system under the ISPRS Potsdam dataset.
Figure 12
Figure 12 demonstrates an extensive view of the results achieved by the IWDADL-VDC methodology on the ISPRS Potsdam dataset, in terms of training and testing loss values over different numbers of epochs. A gradual decline was observed in the training loss as the model adjusted its weights to lessen the classification errors on the testing and training datasets. These loss curves offer a good representation of the model's alignment with the training data and also emphasize its capacity to competently capture patterns in these databases. These assessment outcomes infer that the IWDADL-VDC algorithm repeatedly updates its parameters to reduce the discrepancies between the predictions and the training labels.
Figure 12 .
Figure 12. Loss curve of the IWDADL-VDC approach under the ISPRS Potsdam dataset.
Figure 13 .
Figure 13. Comparative analysis outcomes of the IWDADL-VDC algorithm under the ISPRS Potsdam dataset.
Table 1 .
Vehicle classification analysis outcomes of the IWDADL-VDC system under the VEDAI dataset.
Table 2 .
Vehicle classification analysis outcomes of the IWDADL-VDC model under the ISPRS Potsdam dataset. | 6,928.6 | 2024-01-01T00:00:00.000 | [
"Computer Science",
"Environmental Science",
"Engineering"
] |
The occurrence of Natsushima bifurcata (Polychaeta: Nautiliniellidae) in Acharax hosts from mud volcanoes in the Gulf of Cadiz (south Iberian and north Moroccan Margins)
Miura and Laubier (1990) used the names Nautiliniellidae and Nautiliniella to replace the homonyms Nautilinidae and Nautilina (Miura and Laubier, 1989), because the genus name had already been occupied by a cephalopod mollusc. At present, this small family of uncommon polychaetes includes fifteen species ascribed to eleven genera found in the mantle cavity of deep-sea bivalve molluscs of the families Solemyidae, Mytilidae, Thyasiridae and Vesicomyidae (Miura and Hashimoto, 1996; Dreyer et al., 2004). Nautiliniellids are mainly restricted to Pacific hydrothermal vents and cold seeps (Blake, 1993; Miura and Hashimoto, 1993, 1996; Miura and Laubier, 1989, 1990; Miura and Ohta, 1991). Up to now, only two species, Petrecca thyasira Blake, 1990, found in the gill filaments of Thyasira insignis
specimens from the Laurentian Fan, and Vesicomyicola trifurcatus Dreyer, Miura and Van Dover, 2004, found in the mantle cavity of vesicomyid clams from the Blake Ridge cold seep off the coast of South Carolina, are known to occur in the Atlantic Ocean (Blake, 1990; Dreyer et al., 2004). Nautiliniellids are commonly quoted as commensals or parasites (Blake, 1990; Miura and Laubier, 1990), but the nature of the association between the polychaetes and their bivalve hosts is still unclear.
Communities of benthic animals associated with cold seeps are known from several locations on active and passive continental margins of the Pacific and have recently been discovered also in the Atlantic Ocean (Olu-Le Roy et al., 2004). The occurrence of mud volcanism, cold seepage, hydrocarbon venting and gas hydrates in the Gulf of Cadiz has been intensively investigated since 1996 (e.g. Pinheiro et al., 2003; Van Rensbergen et al., 2005). Within the framework of the UNESCO/IOC Training Through Research Programme, seven research cruises were conducted for this specific purpose: TTR9 (1999), TTR10 (2000), TTR11 (2001), TTR12 (2002), TTR14 (2004), TTR15 (2005) and TTR16 (2006).
The solemyid bivalve Acharax sp. is one of the most common species of the chemosynthesis-based assemblage of the mud volcanoes from the Gulf of Cadiz (Rodrigues and Cunha, 2005). In three of these mud volcanoes, some of the bivalve specimens were found to host in their mantle cavity nautiliniellid polychaetes identified as Natsushima bifurcata Miura and Laubier (1990), previously known only from Sagami Bay (Japan). These recently collected specimens verify the original concept of the species and provide the first record of N. bifurcata for the North Atlantic.
Study site
The Gulf of Cadiz is located at the crossroads of the European and African Atlantic margins and the Mediterranean. The compression between the Eurasian and African tectonic plates creates an interesting geophysical template shaped by volcanic activity and by the interaction between the topography and the circulation of the Atlantic and Mediterranean Ocean Waters. Geologically, the setting of the Gulf of Cadiz is extremely complex and still under debate, but one of the most important structures is a large olistostrome complex emplaced in an accretionary wedge-type environment (Sartori et al., 1994; Maldonado et al., 1999; Gutscher et al., 2002). Since the discovery of the first mud volcano in 1999, about 30 other sites at depths ranging from 200 to 4000 m, with varying degrees of hydrocarbon-rich gas seepage activity, have been located and sampled (Pinheiro et al., 2003; Van Rensbergen et al., 2005).
Collection of samples
Samples were collected during the TTR cruises (Training Through Research Programme, IOC-UNESCO) onboard the RV Prof. Logachev. A TV-assisted grab was used to locate interesting sampling sites in the target mud volcanoes. Whenever Acharax specimens were collected, usually from sites within the crater of active mud volcanoes, they were opened and examined onboard for the presence of nautiliniellid polychaetes. The biological material was preserved in 70 or 96% ethanol (the latter will be used for future molecular analysis).
Diagnosis. Body vermiform, flattened ventrally and arched dorsally, with fairly uniform width throughout. Short prostomium, much wider than long, slightly incised anteriorly, with only a pair of lateral antennae and without eyes. Tentacular segment partially fused to prostomium, with dorsal and ventral cirri, neuroaciculae, and few neurosetae. Parapodia sub-biramous, with well-developed dorsal and ventral cirri. Notopodia short conical, similar throughout the body, with slender notoacicula and without setae. Neuropodia cylindrical, with neuroacicula and numerous setae of two kinds: up to 4 (usually 3) stout, slightly curved hooks placed more dorsally, and more than 100 smaller bifurcate setae placed ventrally.
Supplementary description.Largest specimens measure 17 mm in length and ~3 mm in width, for 72 setigers.Colour in life pink, and after preservation white.
Remarks. The original description of the holotype mentions that the specimens were found inside Solemya sp. bivalves (Miura and Laubier, 1989: 322), but the identification of the bivalve was corrected to Acharax johnsoni in a subsequent work by Ohta (1990, cited in Miura and Hashimoto, 1996: 266).
The specimens from the Gulf of Cadiz are larger than the ones from the type locality (Table 1) and in some neuropodia (Fig. 2) there are 4 hooks (up to 3 in the original description).The diagnosis of the species was updated accordingly.
Ecology. The host bivalve Acharax sp. has a wide distribution in the Gulf of Cadiz: it has been recorded from ten mud volcanoes on the Moroccan and Portuguese margins at depths varying from 358 to 3902 m. The nautiliniellid polychaetes were found in the mantle cavity, near the gill filaments, of Acharax specimens collected from the Jesús Baraza, Yuma and Ginsburg mud volcanoes, all located in the Western Moroccan field within a bathymetric range restricted to 920-1105 m (Fig. 3). At these three mud volcanoes, observed infestation rates varied from 12.5 to 75% and the number of nautiliniellid individuals found in each bivalve varied from 1 to 3 (Table 2). The infested Acharax specimens measured between 3.2 and 6.9 cm (total length).
The majority (69.8%) of the 96 Acharax specimens were collected at the four shallowest mud volcanoes (358-701 m), but none of these specimens were infested. The populations from the three deepest mud volcanoes (1115-3902 m) were also not infested, but the number of specimens collected was much lower (only 4.2% of the total). According to Neulinger et al. (2006), Acharax species differentiation based on shell morphology is likely to underestimate true species diversity within this taxon. A phylogenetic study based on some Acharax specimens from localities of the Pacific and Indian oceans (Aleutian Trench, off Oregon, Costa Rica, and Peru margins, and off Makran, and Java) resulted in two clusters that group populations located far apart (Makran, Oregon, and Peru in one cluster and Java, Aleutian Trench and Costa Rica in the other). These authors propose that the specimens are representatives of at least two different species and do not all belong to A. johnsoni, as assumed previously.
These results emphasize the need for further morphological and probably also genetic studies to resolve taxonomic affinities within Acharax populations. Until then, we cannot assume that the Gulf of Cadiz and Sagami Bay specimens belong to the same species, but nor can we discard this possibility. The same applies to the Natsushima specimens. We could not find morphological evidence to support the establishment of a new species. The comparison with the holotype only revealed differences in the proportions of the specimens (Fig. 2; Table 1). The largest specimens from the Gulf of Cadiz are 2 to 3 times larger than the holotype, the setigers are broader, and the parapodia, including cirri, are thicker. The holotype seems to have proportionally longer and thinner dorsal cirri and antennae, and it is also smaller. This different appearance is due to the more corpulent body of the larger specimens, resulting in proportionally broader and shorter cirri. In support of this, medium-sized specimens, such as DBUA 00765 from the Gulf of Cadiz and USNM 172135 from Sagami Bay, show "intermediate" morphological features between the larger, more corpulent specimens and the holotype. It is possible that N. bifurcata from Sagami Bay and the specimens from the Gulf of Cadiz are cryptic species, but this question can only be answered with more knowledge of reproductive patterns and by the analysis of DNA sequences. As the preservation of the Japanese specimens does not allow DNA analysis, we must consider that both the Cadiz and the Sagami specimens belong to the same morphological species.
There are several cases of specimens from distant locations of other deep sea species that are ascribed to the same morphological species.An example is the commensal polychaete Branchipolynoe seepensis Pettibone, 1986 that was recorded in several bivalve host species both in the Gulf of Mexico and in the mid-Atlantic Ridge.In this case, subsequent studies carried out by Chevaldonné et al. (1998) determined the genetic divergence for the COI and 16S rDNA genes and showed that the Atlantic and Pacific populations should be considered as two isolated phylogenetic species.
Distribution. East Pacific, Sagami Bay: Hatsushima and Okino-yama cold seeps, 1114-1170 m, inside the mantle cavity of Acharax johnsoni Dall, 1891. North Atlantic, Gulf of Cadiz: Jesús Baraza, Yuma and Ginsburg mud volcanoes, 920-1105 m, inside the mantle cavity of an undetermined Acharax species.

DISCUSSION

The genus Natsushima comprises at present two described species, N. bifurcata Miura and Laubier, 1990 and N. graciliceps Miura and Hashimoto, 1996. The two species differ by the presence of respectively short conical and elongated notopodia in the middle segments, by the presence of a fine embedded notoacicula in the former, and by the different size of the dorsal and ventral cirri, which are smaller in the latter. Both Sagami Bay specimens of N. bifurcata occur in cold seeps associated with solemyid bivalves at depths of around 1000 m, and the specimens recently collected in the Gulf of Cadiz corroborate this information. In fact, it is noteworthy that despite the wider bathymetric range of Acharax in the Gulf of Cadiz (358-3902 m), up to now only the populations from the mud volcanoes of the Western Moroccan field, located at depths of 920-1105 m, were found to be infested by nautiliniellids (up to 75% infestation rate).
FIG. 3. -Map of the study area: full triangles -sampled sites; empty triangles -other known mud volcanoes; full circles -sites with Acharax populations; empty circles -sites with infested Acharax populations.JB, Y and G -Jesús Baraza, Yuma and Ginsburg mud volcanoes.
TABLE 2 .
-Infestation rates; number of specimens of Natsushima bifurcata and Acharax sp. collected in each mud volcano (number of N. bifurcata in each host Acharax in brackets). | 2,408 | 2007-03-30T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Mobile App–Reported Use of Traditional Medicine for Maintenance of Health in India During the COVID-19 Pandemic: Cross-sectional Questionnaire Study
Background India follows a pluralistic system for strategic and focused health care delivery in which traditional systems of medicine such as Ayurveda, yoga and naturopathy, Unani, Siddha, Sowa Rigpa, and homoeopathy (AYUSH) coexist with contemporary medicine, and this system functions under the Ministry of AYUSH (MoA). The MoA developed a mobile app, called AYUSH Sanjivani, to document the trends of the use of AYUSH-based traditional and holistic measures by the public across India. Analysis of the data generated through this app can help monitor the extent of the use of AYUSH measures for maintenance of health during the COVID-19 pandemic and aid effective health promotion and communication efforts focused on targeted health care delivery during the pandemic. Objective The purpose of the study was to determine the extent of use of AYUSH measures by the public in India for maintenance of health during the COVID-19 pandemic as reported through the AYUSH Sanjivani mobile app. Methods Cross-sectional analysis of the data generated through the Ayush Sanjivani app from May 4 to July 31, 2020, was performed to study the pattern and extent of the use of AYUSH-based measures by the Indian population. The responses of the respondents in terms of demographic profile, use pattern, and benefits obtained; the association between the use of AYUSH-based measures and symptomatic status; and the association between the duration of use of AYUSH-based measures and the outcome of COVID-19 testing were evaluated based on bivariate and multivariate logistic regression analysis. Results Data from 723,459 respondents were used for the analysis, among whom 616,295 (85.2%) reported that they had been using AYUSH measures for maintenance of health during the COVID-19 pandemic. Among these 616,295 users, 553,801 (89.8%) either strongly or moderately agreed to have benefitted from AYUSH measures. Ayurveda and homeopathic measures and interventions were the most preferred by the respondents across India. Among the 359,785 AYUSH users who described their overall improvement in general health, 144,927 (40.3%) rated it as good, 30,848 (8.6%) as moderate, and 133,046 (37.0%) as slight. Respondents who had been using AYUSH measures for less than 30 days were more likely to be COVID-19–positive among those who were tested (odds ratio 1.52, 95% CI 1.44-1.60). The odds of nonusers of AYUSH measures being symptomatic if they tested positive were greater than those of AYUSH users (odds ratio 4.01, 95% CI 3.61-4.59). Conclusions The findings of this cross-sectional analysis assert that a large proportion of the representative population practiced AYUSH measures across different geographic locations of the country during the COVID-19 pandemic and benefitted considerably in terms of general well-being, with a possible impact on their quality of life and specific domains of health.
Introduction
Coronaviruses, a large family of single-stranded RNA viruses, can infect animals and humans, causing respiratory, gastrointestinal, hepatic, and neurologic diseases [1]. To date, 6 human coronaviruses (HCoVs) have been identified, including the alpha coronaviruses HCoVs-NL63 and HCoVs-229E and the beta coronaviruses HCoVs-OC43, HCoVs-HKU1, and severe acute respiratory syndrome coronavirus (SARS-CoV) [2]. New coronaviruses appear to emerge periodically in humans owing to the high prevalence and wide distribution of coronaviruses, the large genetic diversity and frequent recombination of their genomes, and the increase of human-animal interface activities [3]. The first case of COVID-19 in India was reported on January 31, 2020 [4]. The World Health Organization observed that with appropriate integration, traditional medicine would be a significant option to balance curative services with preventive care, which can help address the unique health challenges of the 21st century [5].
Clinical evidence from a study on the effects of Chinese traditional medicine in the treatment of SARS-CoV-2 demonstrated significant results, and the study proposed that herbal medicine has a beneficial effect in the treatment and prevention of epidemic diseases [6]. A Cochrane systematic review in this area reported that herbal medicine combined with western medicine may improve symptoms and quality of life in SARS-CoV patients [7]. The National Health Commission in China has declared the use of herbal medicine combined with contemporary medicine as a treatment for COVID-19 and has issued many guidelines on herbal medicine-related therapy [8]. The acronym AYUSH stands for Ayurveda, yoga and naturopathy, Unani, Siddha, and homeopathy; these indigenous systems of medicine are practiced in India under the Ministry of AYUSH (MoA). Considering the present scenario and penetration of the AYUSH system into the mainstream health care system in India for preventive and curative purposes, the MoA released an advisory to the public for maintenance of general health and well-being during the COVID-19 pandemic on March 6, 2020 [9]. Although India is a country that follows a pluralistic approach to health care, data regarding the use of traditional systems of medicine or health-seeking trends of people are not available in the public domain. There are reports in the press regarding the use of AYUSH prophylactic measures for COVID-19 [10] as well as for lifestyle and other diseases; however, the extent of their use and the outcomes and benefits obtained are not known. Health care delivery, as well as research in times of natural disasters and epidemics or pandemics, is challenging [11]. The concept of infodemiology has evolved significantly with the ever-increasing penetration of the internet in society and is being efficiently being used to nowcast epidemics, quantify the different trends in epidemics, and document and synthesize data on the use of health care services and other public health-related issues [12,13].
The government of India has taken the initiative to use and integrate the preventive, curative, and rehabilitative potential of AYUSH systems of medicine to strengthen the health care delivery system, and the AYUSH Sanjivani app was developed through a consultative process among experts in the field of AYUSH and information technology (IT) by the MoA to record the patterns and trends of the use of preventive measures adopted by the public to enhance immunity and maintain health during the COVID-19 pandemic. The AYUSH Sanjivani app was intended to motivate and persuade users to achieve a status of healthy well-being while thwarting the tendency of the masses to use untested and unproven remedies or over-the-counter or self-prescription measures, especially when faced with the threat of the pandemic and the physical, physiological, social and economic ramifications of the containment measures required of the public.
Through recent initiatives in smart devices, mobile apps have become a convenient, easy-to-use, and less time-consuming method to generate data from the public. Self-reported health status and health care service use are indispensable indicators to assess the performance and attitude of any health system in the absence of recorded health administration data [14]. An app-based survey has advantages such as wider population access, better response rates, lower cost, ease of analysis, ease of use for participants, assurance of user anonymity and preferences, greater flexibility, and faster data synthesis compared to traditional epidemiological and surveillance methods. Various previous research studies in the field of mobile-based health apps and the adoption of information technology have identified individual preferences and motivations to use these apps based on socioeconomic characteristics, demographics, access to health care facilities, perceptions about the usefulness of the apps, and the effect of existing or perceived disease conditions [15][16][17][18].
Hence, a cross-sectional analysis of the data generated from the app was performed to determine use trends of AYUSH measures by the public during the COVID-19 pandemic.
The primary objective of the cross-sectional analysis was to determine the extent of use of AYUSH advocacies and measures during the COVID-19 pandemic. Secondary objectives were to compare the self-reported incidence of COVID-19 and symptomatic status of the respondents affected with COVID-19 among the users of AYUSH measures as compared to nonusers and to determine the pattern of use of AYUSH measures by users across India. Perceived change in general well-being in terms of appetite, bowel habits, sleep, stamina, and mental well-being among users of AYUSH measures, the relationship between the duration of the use of AYUSH measures and the incidence of COVID-19, and the relationship between the symptomatic status of COVID-19-positive respondents among users and nonusers of AYUSH-based measures were also included as secondary objectives.
Study Design
This is a cross-sectional analysis of data generated through the AYUSH Sanjivani mobile app. The MoA launched the AYUSH Sanjivani app to generate data on the acceptance and use of AYUSH advocacies and measures by the population and its possible impact on the maintenance of health during the COVID-19 pandemic. The content of the app is a self-reporting questionnaire intended for the public to report their preferences, patterns, and trends of use of the measures circulated through the AYUSH advisory released by the MoA for maintenance of health during the pandemic. Self-perceived impact on improvement in general health and the benefits of using AYUSH measures during the COVID-19 pandemic by the respondents were also recorded in the app. We followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines when reporting the findings [19].
Informed Consent and Ethical Consideration
The study was approved by the Central Ethics Committee of Central Council for Research in Ayurvedic Sciences, Ministry of AYUSH, India (1-12/2020-CARICD/Tech/CEC). Upon downloading the app, before voluntary consent was obtained, the user was informed that the information they entered would only be used for research purposes and that their anonymity and confidentiality would be maintained. The users were also informed that by choosing to provide information in this app, they were making a valuable contribution to public health research in the country. It was made explicitly clear that by participating in the survey, users were voluntarily giving their consent to use the data for research purposes.
Study Setting
The app was released through the Google Play store in May 2020 and was available for download across India. The data generated from respondents across all States and Union Territories of India during the period from May 4 to July 31, 2020, were used for the analysis.
Participants
All the residents of India who possessed a smartphone, tablet, or other such device and who were willing to download the app and voluntarily provide the requisite information to the questionnaire in either English or Hindi were eligible to participate in the study. The primary respondents were health seekers of AYUSH or their families, who preferred AYUSH systems for preventive or curative purposes; health seekers who sought consultation at the outpatient department of a national institution, research council, college, hospital, or primary or secondary health care facility across the country; and members of the public who were motivated to use AYUSH measures for maintenance of health during the COVID-19 pandemic. These beneficiaries were sensitized to the AYUSH measures through interaction with health professionals.
Data Sources and Data Collection Methods
The AYUSH Sanjivani app was conceived to motivate and persuade the public to achieve a state of general well-being during the COVID-19 pandemic while documenting the patterns and trends of the use of AYUSH systems in India. The app was also announced and promoted through social media platforms such as the Twitter accounts, Instagram, and Facebook pages of MoA, AYUSH institutions, and hospitals, as well as AYUSH professionals and students, which enabled a wider reach among the general public. The AYUSH Sanjivani app was available for the Android and iOS (Apple) platforms and was made available through the App Store and Google Play store, and the questionnaire was drafted in a simple, easily comprehensible manner, initially launched in English and later rolled out in Hindi according to the World Health Organization guidance for translation and adaptation of instruments [20]. The app started with a COVID-19 guide for users that elaborated the importance of AYUSH for health, the need for self-care, and general and AYUSH measures to practice to improve immunity and maintain health. The Welcome screen of the app is shown in Multimedia Appendix 1.
The app contains three modules to report the desired data (Multimedia Appendix 2). The first module comprises a questionnaire for capturing the trends of use of AYUSH measures among the public across different geolocations. The second module is intended to capture the use trends of AYUSH measures by physicians. The third module aims to garner the cumulative data of health seekers and beneficiaries who were advised to use AYUSH preventive measures by their physicians. This paper focuses on the data collected through the first module of the app, which pertains to use trends of AYUSH preventive measures self-reported by the public. The questionnaire in the app was finalized by collaborative discussions with experts, followed by iterative refining, and included multiple choice questions, modified Likert scales, and yes/no questions.
There were no open-ended questions. The questions were drafted in a simple, easy-to-understand format in English and Hindi so that they were comprehensible to people with basic education and knowledge of how to use smartphones.
The questionnaire was subdivided into three different layers. The first layer captured basic sociodemographic characteristics, such as gender, and the geographical location of the respondents.
A question on whether they were using AYUSH measures (advised by the MoA or State governments, or other AYUSH measures) or not using any AYUSH measures was also included to classify the respondents into two categories (users or nonusers of AYUSH measures). Those who responded that they are using or have used AYUSH measures during this pandemic were asked three additional questions. The first question was related to the duration of use of AYUSH measures, and the second question captured the opinions of the respondents on whether the practice of AYUSH measures had benefitted them. The third question was intended to capture the possible reasons for finding AYUSH beneficial, reported as per the respondent's experiences.
The second layer of the app contained another four questions, which could be answered by all respondents irrespective of whether they were using AYUSH measures. The questions were intended to capture the information such as the respondent's occupation, presence or absence of any pre-existing disease, the risk of contracting COVID-19 ("at risk" was categorized as being in quarantine, a health care worker treating COVID-19 patients at hospitals or in communities, a general public official implementing lockdown, or a primary contact of a COVID-19-positive patient). The COVID-19 test status was elicited through a separate question in which the respondent was required to select any of the options of tested positive and asymptomatic, tested positive and symptomatic, tested negative, and never tested. The respondents were required to furnish data related to their COVID-19 status if they underwent testing either on their own or on medical advice.
The third layer of the app was accessible to only those respondents who were using AYUSH measures, and the questions included in it pertained to the use trends of measures advised under various AYUSH systems. This layer contained another set of six questions, namely duration of intake of AYUSH measures, the regularity of intake, self-perceived improvement in parameters of well-being (appetite, bowel habits, sleep, stamina, and mental well-being) or no improvement after use of AYUSH, and the onset of any influenza-like-illness symptoms. The respondent's self-perceived impact on their general health was also captured (see the detailed questionnaire in Multimedia Appendix 3).
Outcome Measures
The primary outcome was to measure the extent of use of AYUSH measures by the respondents who reported they used or did not use AYUSH measures during the COVID-19 pandemic. Further, the patterns and extent of use were assessed as distributions across sociodemographic characteristics such as geographical location, gender, urban or rural location, and occupation.
Secondary outcomes were to compare the incidence of COVID-19 among the respondents who did or did not use AYUSH measures, the pattern of use in terms of duration, regularity, use trends across different AYUSH systems, and the extent of benefits received assessed through a 5-point Likert scale ranging from strongly agree to strongly disagree. The reasons for finding AYUSH prophylactic measures beneficial in terms of responses were categorized as an overall feeling of good health, reducing the severity of symptoms while having COVID-19, or improvement in other minor ailments; these were also evaluated as a secondary outcome. Another secondary outcome was the overall improvement in the general health of the respondents based on the responses ranging from "no change" to "excellent improvement." The change in parameters of well-being categorized as "improved or no change" elicited individually for all the parameters was also a secondary outcome. The association between the symptomatic status of respondents affected by COVID-19 and use or nonuse of AYUSH measures was evaluated. The association between the duration of use of AYUSH measures and incidence of COVID-19 was also evaluated as a secondary outcome.
Bias
The app was promoted across social media platforms and through AYUSH institutions across the country for wider reach among all geographic and socioeconomic strata. However, the data are not representative of the population, as the respondents were restricted to people who are smartphone users and are more active on the web as well as those who already follow the MoA and other AYUSH-related pages on social media platforms. Moreover, the proportion of nonusers was much smaller compared to that of users; hence, the findings may have limited generalizability. The possibility of information bias could also not be completely ruled out, as the information provided in the app was retrospectively obtained, such as frequency of use, regularity of use, type of medicines used with the duration of use, etc.
Study Size
Data from 723,459 respondents collected from May 4 to July 31, 2020, through the AYUSH Sanjivani mobile app was used for this cross-sectional analysis.
Statistical Analysis
The qualitative data received through the app were imported into Excel (Microsoft Corporation), where they were numerically coded. Numerical codes were assigned to all the options for each question in the questionnaire. This coded Excel file was then imported into STATA 16.1 (StataCorp LLC) and used for statistical analysis. Descriptive statistics for categorical data were reported using frequencies and percentages.
Comparisons among users and nonusers of AYUSH measures were performed using the chi-square test in terms of respondents being tested or not for COVID-19, the outcome of COVID-19 testing, the symptomatic status of the COVID-19-positive respondents, the risk status of the respondents, and the presence or absence of comorbid conditions. Logistic regression analysis was performed to compute the crude odds ratio by measuring the association between the duration of use of AYUSH-based measures and the outcome of COVID-19 testing (positive or negative). The association between the use of AYUSH-based measures and the symptomatic or asymptomatic status of COVID-19-positive respondents was also evaluated. Adjusted odds ratios considering the risk status and presence of comorbidities as confounders were also computed. A P value of <.05 was considered significant.
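As an illustration of the crude and adjusted odds-ratio computation described above, the following is a minimal statsmodels sketch on synthetic data; the variable names and the data frame are illustrative assumptions, not the study data.

```python
# Sketch of crude vs. adjusted odds ratios from logistic regression,
# using statsmodels on a toy data frame; column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "covid_positive": rng.integers(0, 2, n),   # outcome of COVID-19 testing
    "short_duration": rng.integers(0, 2, n),   # AYUSH use < 30 days
    "comorbidity":    rng.integers(0, 2, n),   # confounder 1
    "at_risk":        rng.integers(0, 2, n),   # confounder 2
})

crude = smf.logit("covid_positive ~ short_duration", data=df).fit(disp=0)
adjusted = smf.logit("covid_positive ~ short_duration + comorbidity + at_risk",
                     data=df).fit(disp=0)

# Exponentiated coefficients give the odds ratios reported in the paper
print("crude OR:   ", np.exp(crude.params["short_duration"]))
print("adjusted OR:", np.exp(adjusted.params["short_duration"]))
print(np.exp(adjusted.conf_int().loc["short_duration"]))  # 95% CI
```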
Use Trends Among Different Streams During the COVID-19 Pandemic
The number of respondents who reported using Ayurveda measures was 90,357/433,560 (21.0%), while homeopathy was used by 47,639/433,560 (11.0%), and only a small proportion reported the use of Unani and Siddha interventions. It is intriguing to note that 291,251/433,560 (67.0%) of the users reported having been practicing yoga, pranayama, or meditation, or using home remedies such as spices in cooking, drinking warm water, steam inhalation, and other such practices for maintenance of health (Multimedia Appendix 5).
The use of warm water in routine life for drinking purposes was reported as the most commonly adopted measure, followed by the practice of yoga or pranayama as a choice for the maintenance of health and well-being. Among homeopathy medicines, Arsenicum Album 30C was the intervention of choice, while Samshamani Vati and AYUSH-64 were the most popular among the Ayurveda interventions. Kaba Sura Kudineer, a decoction used in the Siddha system, and the Unani interventions of Behidana, Unnab, and Sapistan decoction were reported as the most commonly used, albeit by only a small proportion of users. Agastya Hareetaki (an Ayurvedic intervention); use of Anu Taila, coconut oil, or sesame oil for nasal instillation, or oil pulling with coconut or sesame oil; use of Chyavanprasha; turmeric milk; and herbal tea were the other frequently used interventions in the Ayurveda stream. Bryonia alba, Rhus toxicodendron, Belladonna, Gelsemium, and Eupatorium perfoliatum were the other commonly used homeopathic interventions. Nilavembu Kudineer decoction and Adathodai Manapagu were some other popular Siddha interventions (Multimedia Appendix 6).
Benefits Obtained by the Public Through the Use of AYUSH Measures
Among the 616,295 respondents who used AYUSH measures, 231,552 (37.5%) reported using them for more than 30 days.
Data on Pre-existing Diseases, Symptoms, and Risk Status Among the Respondents
Data on pre-existing diseases (comorbidities) were furnished by 408,089 respondents, of whom 380,731 (93.3%) reported the absence of any pre-existing disease. Hypertension was the most common pre-existing disease (comorbidity), reported by 11,941/408,089 respondents (2.9%), followed by diabetes mellitus, heart disease, and asthma. The presence of more than one pre-existing disease was reported by 9266/408,089 respondents (2.3%) (Table 3).
Duration of Use of AYUSH and Symptom Status of COVID-19-Positive Respondents
Among the 12,002 respondents who tested positive for COVID-19 and used AYUSH measures, 8101 (67.5%) reported their duration of use of AYUSH measures as less than 30 days, and the others reported a longer duration of use. Among the 12,002 COVID-19-positive respondents using AYUSH measures, 8100 (67.5%) were asymptomatic.
Association Between Duration of Use of AYUSH Measures and Incidence of COVID-19
The results of the logistic regression analysis show that the odds ratio (OR) of testing positive for COVID-19 is 1.52 (95% CI 1.44-1.60, P<.001) for respondents who were using AYUSH measures for less than 30 days compared to those who were using these measures for more than 30 days. The adjusted OR considering the effects of confounders, namely the presence of comorbidities and the respondent being in a risk category, is 0.90 (95% CI 0.85-0.95, P<.001) (Table 4).
Association Between the Use of AYUSH Measures and Symptomatic Status of COVID-19 Respondents
The results of the logistic regression analysis revealed that the OR of being symptomatic was 4.01 (95% CI 3.61-4.59) for nonusers of AYUSH measures compared to users. The adjusted OR considering the effects of confounders, namely the presence of comorbidities and respondents being in a risk category, is 3.48 (95% CI 3.06-3.95) (Table 5). Table 5. Logistic regression analysis to identify the association between use of AYUSH-based measures and symptomatic status of respondents who tested positive for COVID-19.
Principal Findings
A representative population of 723,459 people from different geolocations across the country downloaded the AYUSH Sanjivani app and reported the perceptions and practices they had adopted in the wake of the COVID-19 pandemic that significantly altered their lifestyle. A majority of the respondents used AYUSH measures for maintenance of health and prevention of disease, and most of them reported having benefitted from the use of various interventions and practices. A positive association between the prolonged practice of AYUSH measures and symptomatic status could be observed in the respondents who were infected with COVID-19.
In this study, the maximum representation was of AYUSH users, and it is expected that the willingness to use an app specifically targeting AYUSH users will be greater among health seekers who are familiar with these systems of medicine. Findings from a previous study revealed that even in a developing country such as India, 32% of the patients attending a medical care facility in urban settings used the internet, and 75% of them sought medical information through the internet; this would support the substantial amount of data generated through this app [21]. In a national representative cross-sectional survey conducted in 2014 in India, it was observed that 6.9% of all patients sought AYUSH services for different ailments in a recall period of 2 weeks, without a great differential between urban and rural regions [22]. This targets the reported use of AYUSH care services for disease management, which is expected to be lower compared to the use of the AYUSH system for preventive care.
Maximum reporting was observed from the states of Uttar Pradesh, Maharashtra, and Madhya Pradesh, which can be attributed to the high population density in these states. The decision to seek health care is not only contingent upon the experience of illness but also depends on various social, economic, and demographic factors [22]; it was observed that approximately three quarters of the total respondents were from rural areas, and most of them were users of AYUSH measures. This can be attributed to the tendency of people in rural areas to adhere more to tradition compared to the urban population. This is consistent with a study based on the World Health Organization Study on Global Ageing and Adult Health (WHO-SAGE) survey, which suggests that individuals living in rural areas are more likely to report the use of traditional healers [23]. In this study, it was observed that in the setting of India, being female was associated with a lower likelihood of users downloading mobile apps and furnishing their personal information and preferences, despite evidence from recent studies that does not suggest differential internet use between males and females [17]. Students and self-employed workers accounted for the majority of respondents as well as of users of AYUSH measures, which underlines the findings of earlier studies that predict that the younger population, literate people, and full-time workers are more likely to use health apps and be motivated to use health-related advice [24].
The majority of respondents reported having benefitted from using AYUSH measures; they rated the degree of improvement as mild, good, or excellent, and attributed this improvement to their perceived experience of overall well-being. The self-reported public experience of improvement in parameters of well-being, such as sleep, appetite, stamina, and mental well-being, is a good indicator for integrating AYUSH measures for well-being into the daily routine. Preliminary evidence on the impact of COVID-19 on the public reveals significant health-related anxiety, generalized anxiety, psychological stress, and sleep disorders, and government-implemented lockdowns inculcated many habits, such as decreased physical activity and exercise and increased snacking, with deleterious effects on vulnerable populations and especially on those with pre-existing comorbidities [25,26]. The improvements perceived in the level of well-being and the general aspects of health, measured in terms of individual satisfaction with appetite, sleep, stamina, mental wellness, and bowel habits, indicate a positive role for traditional AYUSH interventions and practices in maintaining holistic health and preventing long-lasting adverse health outcomes.
Ayurveda and homeopathy were the systems of medicine preferred by the majority of the respondents; this can be attributed to these two AYUSH systems having the largest numbers of hospitals and health care providers in India. Arsenicum Album 30C, Samshamani Vati, and Ayush-64, which were deployed as frontline prophylactic interventions, were the interventions most used by the public to maintain health. Arsenicum Album 30C is a homeopathic intervention used for respiratory ailments, while the other two (Samshamani Vati and Ayush-64) are Ayurvedic formulations prescribed for the clinical management of pyrexia, influenza-like illness, cough, and dyspnea [27].
The practices that the public engaged in include yoga, pranayama, and meditation, along with common home remedies such as using spices in cooking, drinking turmeric milk, drinking warm water, and steam inhalation. The practice of engaging the mind and body through meditation, pranayama, and yoga has attracted significant attention and has been extensively studied for its possible beneficial effects on physical and mental health outcomes [28]. A growing body of evidence suggests that the elements of physical postures, breathing, and meditation can improve physical well-being, including balance, range of motion, blood pressure, pain, fatigue, and general health, which could be correlated with the benefits reported by the AYUSH users [29].
The proportion of participants who underwent laboratory testing for COVID-19 could not be meaningfully compared between AYUSH users and nonusers, as the majority of the respondents were using AYUSH measures and those not using them were few in number. A longer duration of use of AYUSH measures appears more likely to confer protection than use for less than 15 days: the likelihood of being COVID-19-positive was lower in respondents who had used AYUSH measures for more than 30 days. In AYUSH systems, the use of diet or medicine is targeted at producing an ideal state of homeostasis, which would reflect the inherent strength of the body in immunopotentiation and prevention of disease. Clinical studies of Rasayana activity (medicines or practices with rejuvenating potential) in healthy individuals report better outcomes when administered for 60 days or more, implying that compliance with AYUSH measures would ideally require a longer duration to act in the macrochannels and microchannels of circulation and bring about optimal health [30]. This underlines the pattern seen in this analysis, where the odds of being COVID-19-positive, or symptomatic after testing positive, were greater in respondents who had used the preventive measures for a shorter duration. Moreover, a good proportion of the respondents who used AYUSH measures relied on home remedies, yoga, pranayama, meditation, and other practices without resorting to any AYUSH medications with specific prophylactic potential.
The qualitative appraisal of the analyzed data reflects that a considerable majority of the respondents benefitted from the use of AYUSH measures, whether traditional formulations with centuries of use in the maintenance of health or home remedies that are an integral part of Indian culture and cuisine. Owing to the long history of use of many herbal remedies and the experience passed from generation to generation, people rely on herbal remedies and simple home remedies for common diseases across India, irrespective of sociocultural, religious, and geographical differences [31]. The use of AYUSH measures is likely to evoke a positive response in the psychological and physical well-being of the respondents [32].
Limitations and Strengths of the Study
Because this is a cross-sectional analysis of data generated from a mobile app, the documented data represent smartphone users only. A limitation of the study is the inability to capture generalizable data reflecting true health-seeking trends, as only people with access to smartphones and good internet connectivity responded to the questionnaire. Because the representation of nonusers of AYUSH measures was minimal, a statistical comparison between users and nonusers of AYUSH measures could not be performed. Although AYUSH measures are generally practiced in many states for both curative and preventive purposes, representation from some of these states was meager; hence, a true representation of users or nonusers of AYUSH measures could not be captured. The incidence of COVID-19 among the respondents was self-reported, and it is difficult to determine the relationship between the use of AYUSH measures, duration of use, and incidence of disease or symptomatic status among the general public.
Finally, this is the first study to document how time-tested indigenous systems of medicine are being used by the public during a pandemic of unprecedented spread, morbidity, and mortality. The large amount of data obtained is the greatest strength of this analysis, as it dilutes the influence of outliers that might otherwise misrepresent the data, and it has enabled us to provide a realistic picture of the characteristic attributes and patterns of the population. This analysis offers a starting point for future researchers to initiate more interventional studies based on the use trends demonstrated here.
Conclusion
The findings of this cross-sectional analysis assert that a good proportion of the representative population adopted AYUSH measures across different geolocations of the country during the COVID-19 pandemic. Although people report anecdotally that traditional systems of holistic healing are good for the maintenance of health and well-being, our study findings also support that the use of AYUSH measures provided better health, improved parameters of well-being, and even helped prevent other illnesses. This pattern suggests possibilities for exploring the role of AYUSH care, considering its acceptance, accessibility, and possible benefits, in the area of pluralistic health care.
To improve the use of a pluralistic health care delivery system, it is imperative to understand the acceptability, use trends, and possible impact on the quality of life and specific domains of health among the public. The response obtained in this study points to a possible functional integration and cross-hybridization of the merits of different systems to effectively generate a positive outcome on integrated health care delivery targeting universal health coverage. To assess the multiple levels of impact of AYUSH preventive measures on health, future studies need to apply diverse disciplines and methods, including intervention studies, longitudinal cohort studies, as well as qualitative observations to examine the nature of the benefits offered by these measures.
"Medicine",
"Computer Science"
] |
Fatigue behaviour of open-hole samples and automotive mini-structures made of woven glass-fibre-reinforced polyamide 6,6
In the automotive industry, the integration of thermoplastic composite components represents a high-potential solution to the mass reduction challenge. In this study, a woven glass-fibre-reinforced composite with a polyamide 6,6 matrix is considered for the purpose of being integrated into automotive parts. Tension-tension fatigue tests were conducted on [(0/90)3] open-hole samples. These tests were instrumented with non-destructive techniques, namely acoustic emission and infrared thermography. Acoustic emission results showed fibre-matrix debonding and fibre breakages in open-hole samples, located around the hole. Furthermore, 3-point bending fatigue tests were performed on "omega" mini-structures. A semi-empirical model was used in order to predict the fatigue lives of both open-hole coupons and automotive mini-structures. Predictions of the model for open-hole samples underestimate experimental fatigue lives. Nevertheless, the semi-empirical model showed good results for the fatigue life prediction of composite mini-structures.
Introduction
Glass-fibre-reinforced composites with a thermoplastic matrix are strategic for automotive manufacturers because of their strength-to-weight ratio, recyclability, production rate, and cost. Many vehicle parts are subjected to fatigue loading, originating from the road itself, the users, or the engine. Thus, it is important for car manufacturers to be able to predict the fatigue behaviour of composite automotive parts.
According to Degrieck & Van Paepegem [1], fatigue models can be classified into three groups: fatigue life models, phenomenological models predicting residual stiffness/strength, and progressive damage models. In the context of this study, fatigue life models seem particularly appropriate, since they allow the direct determination of the number of cycles to failure for a given set of experimental conditions [2,3,4].
The fatigue life model proposed by Epaarachchi & Clausen [4] has been used by several authors to model the fatigue behaviour of composite materials. Mortazavian et al. [5] developed a model based on the work of Epaarachchi & Clausen [4] to predict the fatigue life of short-fibre-reinforced polymers. The authors showed that this model accurately predicts the fatigue life of short glass fibre/PA66 composites for several temperatures, stress ratios, and fibre orientations. Other authors have used the model presented in [4] to predict the fatigue life of composites reinforced with continuous fibres [6,7,8,9].
This study is focused on a woven glass-fibre-reinforced polyamide 6,6 (referred to as GFRPA66). In a previous study, the influence of fabric orientation and conditioning on fatigue damage of this composite was studied [10]. Moreover, the fatigue life model proposed by [4] was investigated [11]. This model showed a good ability to predict the fatigue lives of the composite material studied for different fabric orientations and conditionings. Thus, the present study is a straight continuation of these works and aims to evaluate the capacity of this model to predict the fatigue life of open-hole samples and automotive mini-structures.
Tested material
The composite material studied is made of three plies of a 2/2 twill woven glass fabric impregnated with polyamide 6,6 resin. The glass fibre fabric has a weight of 600 g/m² and a warp to weft ratio of 50/50. The fibre mass fraction (m_f) is equal to 0.63 and the void content is below 1%. The resulting composite plates are characterised by a density of 1.78 g/cm³. The material is provided as plates of 1.53 mm thickness, and coupons are cut using a water jet cutting technique. It has been checked that this technique has no significant influence on the material moisture content. The specimen edges were polished to remove mechanical damage caused by the cutting.
The glass fibre fabric lay-up, referred to as [(0/90)3], has the warp direction of each ply oriented at 0° from the tensile axis (x axis). Polyamide 6,6 being known to be very sensitive to moisture, all samples used in this study were conditioned at RH50. Mechanical properties of the material are detailed in Table 1.
Open-hole samples
Rectangular [(0/90)3] samples were drilled in the center of the coupon in order to create open-hole samples. The general dimensions of the coupon were 200 x 20 x 1.57 mm. Three hole diameters were used, namely 6, 7, and 8 mm.
Mini-structures
The mini-structures studied are shaped by thermoforming a [(0/90)3] woven glass-fibre-reinforced polyamide 6,6 plate. The general dimensions of the mini-structure are 730 mm in length and 140 mm in width. The central part has an omega section (Figure 1).
Mechanical testing
Tension-tension fatigue tests on open-hole samples were performed using an INSTRON 8501 servo-hydraulic machine. The jaws of the test machine clamp 40 mm of each specimen extremity, and 80-grit sandpaper was used in the jaws to improve clamping. Constant amplitude loads were applied in a sinusoidal waveform at a frequency of 1 Hz in order to limit self-generated heating of the specimen. The stress ratio (R), i.e. the ratio between minimum (σ_min) and maximum (σ_max) stresses, was equal to 0.1 for all tests.
3-point bending fatigue tests on mini-structures were performed using a servo-hydraulic machine equipped with a 100 kN load cell. The stress ratio was set to 0.1, and constant amplitude loads were applied in a sinusoidal waveform at a frequency of 2.5 Hz. The space between the two lower support spans is 480 mm and the span diameter is 30 mm. The load is applied in the middle of the mini-structure, also using a 30 mm diameter span.
2.3 Non-destructive techniques and observations
Acoustic Emission (AE)
Acoustic emission monitoring was performed on open-hole samples using the AE system from Mistras Group. Two Micro-80 sensors with a resonant frequency of 300 kHz and an active surface diameter of 10 mm were used. They were placed at the gauge extremities of the specimen using silicon grease as the coupling agent, with a distance of 100 mm between their centers. The amplitude threshold was set to 35 dB. Table 2 shows the settings of the AE system used. Each test was preceded by a data acquisition calibration step. Using a pencil lead break procedure, the acoustic wave speed as well as the attenuation phenomenon were measured. For the latter, the lead breakage operation was repeated several times between the two sensors, at regular intervals (Hsu-Nielsen method). This procedure showed that the attenuation phenomenon is negligible in the present work.
Post-processing was done using a multi-parametric identification based on the k-means algorithm. The k-means algorithm aims to partition observations into k clusters by minimizing the Euclidean distance between each observation and the nearest center (C_k = c_1, c_2, ..., c_k). This algorithm is unsupervised, which means that the number of clusters k has to be known a priori. Observations are assimilated to an n-dimensional vector (X = x_1, x_2, ..., x_n). The k-means procedure can be detailed as follows:
1. Random initialization of the cluster centers for all k classes (C_k = c_1, c_2, ..., c_k).
2. Euclidean distance calculation between each observation and the cluster centers.
3. Assignment of each observation to the cluster which minimizes the Euclidean distance between the observation and the cluster center.
4. Calculation of the new cluster centers for the new k classes created.
5. Return to step 2 while the coordinates of the cluster centers keep changing.
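To make the procedure concrete, here is a minimal NumPy sketch of the five steps above applied to a hypothetical matrix of acoustic events; the synthetic amplitude bands and the five descriptor columns are illustrative assumptions, not the study's data (in practice the descriptors would also be normalized before clustering).

```python
# Minimal NumPy sketch of the k-means procedure (steps 1-5) on a hypothetical
# AE event matrix. Each row is one event; the five columns stand for the
# descriptors used in the study (amplitude, duration, rise time, energy,
# counts). All values here are synthetic.
import numpy as np

def kmeans(X, k=3, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: random initialization of the k cluster centers
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Steps 2-3: Euclidean distances, assign each event to nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 4: recompute the center of each cluster
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 5: stop when the centers no longer move
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Synthetic AE events: three amplitude bands mimicking matrix cracking,
# interface damage, and fibre breakage (illustrative only).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 2.0, size=(200, 5)) for m in (40.0, 60.0, 85.0)])
labels, centers = kmeans(X, k=3)
# Rank clusters by their amplitude center (column 0): lowest -> matrix cracking
order = centers[:, 0].argsort()
print("amplitude centers (sorted):", centers[order, 0])
```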
In this study, observations are acoustic events and n is chosen equal to five among all AE descriptors: amplitude, duration, rise time, energy, and number of counts. Each cluster is then associated with one damage mechanism. Studies dealing with woven composite damage processes have highlighted three major damage types: matrix cracking, interface damage, and fibre breakage [12,13,14,15]. Thus, it was chosen to create three clusters. Attribution of each cluster to one particular damage mechanism was done using previous results for clustering based on the amplitude only. Several authors [16,17,18,19] have shown that acoustic events with lower amplitudes are associated with matrix cracking, whereas those with higher amplitudes are associated with fibre breakage. The intermediate range corresponds to interface damage. Based on these results, each cluster was associated with one damage mechanism depending on its amplitude center value.
Infrared Thermography
An infrared camera from Cedip Infrared Systems with a detector resolution of 90 mm/pixel was used. The energy radiated by the specimen can be converted into temperature levels provided that the specimen emissivity is known. In this study, this parameter could not be determined experimentally. Thus, instead of the absolute temperature, the temperature variation at the surface of the specimen has been considered.
Fatigue life model
The fatigue life model used in this study is the one proposed by Epaarachchi and Clausen [4]. This model allows the prediction of the fatigue life using a very limited amount of experimental data. It is based on the hypothesis that the material strength undergoes a continuous decay following a power law, as proposed by Caprino and D'Amore [3]: σ_N = σ_0 - a(N^b - 1) (Eq. (1)), where σ_N is the residual strength after N cycles, σ_0 is the initial static strength, b is a positive definite constant dependent on the material and the mode of loading, and a is assumed to increase linearly with the stress amplitude. Finally, the model is presented in Eq. (2) [4],
where N_f is the fatigue life, σ_max is the maximum fatigue stress, σ_u is the ultimate strength, f is the frequency, R is the stress ratio, and θ is the smallest angle between the loading axis and the fibres. The parameter λ is assumed to be equal to 1.6 according to Epaarachchi et al. [4].
Hence, α and β are the only two material parameters (dependent on the mode of loading) that need to be determined from experimental data. Only one S-N curve for a given stress ratio, frequency, and lay-up is necessary to determine these parameters. In this study, α and β were determined using the Wöhler curve obtained at RH50 on [(0/90)3] plain coupons.
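As the exact forms of Eqs. (1) and (2) did not survive extraction here, the sketch below illustrates the fitting step with a Caprino-D'Amore-type residual-strength law as a stand-in; the strength value, the S-N points, and the parameter names are illustrative assumptions only.

```python
# Hedged sketch of fitting the two model parameters on a single S-N curve.
# Stand-in model (Caprino-D'Amore-type residual strength decay, assumed here):
#     sigma_N = sigma_u - a * (N**b - 1),  failure when sigma_N = sigma_max
# =>  N_f = (1 + (sigma_u - sigma_max) / a) ** (1 / b)
# The pair (a, b) plays the role of (alpha, beta); all numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

SIGMA_U = 400.0  # MPa, assumed ultimate strength of the plain laminate

def log_nf(sigma_max, a, b):
    """log10 of predicted cycles to failure (fitting in log space)."""
    return np.log10((1.0 + (SIGMA_U - sigma_max) / a) ** (1.0 / b))

# Illustrative Woehler-curve points (sigma_max in MPa, N_f in cycles)
sigma = np.array([320.0, 300.0, 280.0, 260.0, 240.0])
nf = np.array([3.0e3, 1.6e4, 9.0e4, 5.2e5, 3.1e6])

(a_fit, b_fit), _ = curve_fit(log_nf, sigma, np.log10(nf), p0=(10.0, 0.1))
print(f"fitted a = {a_fit:.3f}, b = {b_fit:.4f}")
print("predicted N_f at 290 MPa:", 10 ** log_nf(290.0, a_fit, b_fit))
```

Fitting in log space gives each decade of life equal weight, which is the usual practice for S-N data.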
Results and Discussion
In a previous study [11], the ability of the fatigue model presented in part 2.4 to predict the fatigue life of GFRPA66 plain samples with different fibre orientations and conditionings was evaluated. Based exclusively on the Wöhler curve obtained at RH50 on the [(0/90)3] fabric, the model correctly predicts the fatigue life of GFRPA66 for any other fabric orientation and conditioning.
For example, Figure 2 shows the fatigue lives determined with this model for [(±45)3] RH100 GFRPA66 plain samples.
Open-hole samples
Monotonic and fatigue tests were conducted on [(0/90)3] rectangular samples with a 7 mm diameter hole. Fatigue tests were instrumented with infrared thermography and acoustic emission.
Fatigue life
In the first stage, monotonic tensile tests were conducted on open-hole samples at a crosshead speed of 1 mm/min in order to determine the tensile strength. The stress value is calculated far from the hole and is denoted σ∞.
Fatigue tests were then performed to determine the fatigue lives of open-hole samples. Results are shown in Figure 3. In order to apply the fatigue life model (Equation 2), it is necessary to determine the local ultimate strength (σ_u) and the maximum local fatigue stress (σ_max) around the hole. For that purpose, finite element simulation was used.
Finite Element Modelling
A simplified FE model (Figure 4) was used, considering that one fabric ply, noted (0/90), is equivalent to the stacking of a 0° UD ply and a 90° UD ply. In order to ensure the symmetry of the stacking, the central woven ply is modelled as the stacking of four equivalent UD plies. Finally, the 3-ply [(0/90)3] woven composite is modelled with the following equivalent UD lay-up: [0°/90°/0°/90°/90°/0°/90°/0°]. The elastic coefficients of the equivalent UD plies were determined so as to ensure that the stacking of 0° and 90° plies is equivalent to the woven composite (Table 3). In order to determine the ultimate local strength around the hole, the point stress criterion (PSC) was used. This semi-empirical method states that material failure occurs when the local stress at a characteristic length (d_0 = 0.42 mm) along the ligament reaches the material strength. This characteristic length was determined on 7 mm diameter open-hole samples and then checked on 6 mm and 8 mm diameter samples (Table 4).
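As a simplified illustration of the PSC logic (the study itself used the orthotropic FE model), the sketch below evaluates the criterion with the classical Kirsch stress distribution for an isotropic plate; the local strength value is an assumed placeholder.

```python
# Illustrative PSC sketch. Assumption: the classical Kirsch solution for an
# *isotropic* infinite plate with a circular hole,
#   sigma_yy(x, 0) = s_inf/2 * (2 + (R/x)**2 + 3*(R/x)**4),  x >= R,
# stands in for the orthotropic FE stress field of the study. The local stress
# is evaluated at the characteristic distance d0 ahead of the hole edge.
import numpy as np

D0 = 0.42e-3           # characteristic length (m), value quoted in the text
SIGMA_U_LOCAL = 400e6  # Pa, assumed local (un-notched) strength

def kirsch_syy(s_inf, R, x):
    """Tangential stress along the ligament ahead of the hole (isotropic)."""
    r = R / x
    return 0.5 * s_inf * (2.0 + r**2 + 3.0 * r**4)

def psc_failure_stress(R, d0=D0, strength=SIGMA_U_LOCAL):
    """Remote stress at which sigma_yy(R + d0) reaches the material strength."""
    # kirsch_syy is linear in s_inf, so the criterion can be inverted directly
    concentration = kirsch_syy(1.0, R, R + d0)
    return strength / concentration

for dia_mm in (6.0, 7.0, 8.0):
    s_fail = psc_failure_stress(R=dia_mm / 2.0 * 1e-3)
    print(f"hole diameter {dia_mm:.0f} mm -> predicted remote strength "
          f"{s_fail / 1e6:.1f} MPa")
```

With a fixed d0, larger holes yield lower predicted remote strengths, reproducing the hole-size effect that the PSC is designed to capture.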
Fatigue Life Estimation
The model parameters, α and β, are taken equal to those obtained on plain samples. Fatigue life estimation is shown in Figure 5. Results show that the model tends to underestimate the fatigue life of open-hole samples. It is worth noting that the higher the fatigue stress level, the larger the model underestimation. This observation is consistent with the assumption made by Epaarachchi & Clausen [4], i.e. that the ultimate strength has to be obtained at the same strain rate as that of the specimen subjected to fatigue loading. So far, this hypothesis was not taken into account, since monotonic tensile tests were performed at 1 mm/min.
Damage evolution
Fatigue tests on open-hole samples were instrumented with infrared thermography and acoustic emission. Figure 6 shows the surface temperature variation observed at the end of a fatigue test: a localized heating of the coupon on the right side of the hole indicates damage initiation at the edge of the hole.
This observation was confirmed by acoustic emission results (Figure 7). Results show an accumulation of events related to interface and fibre damage around the hole (position = 100 mm), whereas matrix damage is recorded over the entire coupon.
Figure 8 shows the force vs. span displacement curve obtained with the FE model. The numerical model fits the experimental data well up to 1000 N. Beyond this load, some geometrical effects appear in the FE modelling, leading to a deviation from the experimental data.
3-point bending fatigue
Fatigue tests were performed on automotive mini-structures at three force levels. The experimental fatigue lives are shown in Figure 10.
A preliminary study was done by applying the fatigue life model with force instead of stress. For this force-based version of the model, the tensile strength (σ_u) and the maximum fatigue stress (σ_max) were respectively replaced by the force at failure and the maximum fatigue force. Moreover, the model parameters α and β being dependent on the loading mode, 3-point bending fatigue tests were conducted on rectangular coupons in order to determine a new set of parameters. The estimated fatigue lives of the composite mini-structure are shown in Figure 10 together with the experimental data. This preliminary version of the model gives satisfactory results in terms of fatigue life estimation. However, it can be improved in order to be applied in an industrial context.
In order to use the semi-empirical model as it is given in Equation 2, a representative local stress has to be chosen. In a first stage, the maximum local stress in the z direction (as given in Figure 9) was chosen. The FE model was used to determine this local stress value (noted σ_zz) for the different maximum force levels used in the fatigue tests and for the force at failure.
Then, the model presented in Equation 2 was applied with the set of parameters determined for 3-point bending. Results are shown in Figure 11. The model fits the experimental data very well, proving its ability to evaluate the fatigue life of a composite mini-structure. Additional fatigue tests on mini-structures with a different fabric orientation, [(±45)3] for instance, could be used to validate the semi-empirical model for a different fibre orientation. This could also validate the initial choice of using the local stress in the z direction.
Conclusions
This study deals with the fatigue behaviour of a woven glass-fibre-reinforced composite with a PA66 matrix. Tension-tension fatigue tests were performed on open-hole samples, instrumented with acoustic emission and infrared thermography. Monitored data have highlighted a damage accumulation around the hole, identified as fibre-matrix debonding and fibre breakages. A fatigue life model was applied to open-hole samples, with material parameters determined on plain samples. Results show a slight underestimation of the fatigue life. In addition, 3-point bending fatigue tests were performed on automotive mini-structures. The fatigue life model tested on open-hole samples was also applied to the mini-structure. As a reminder, the inputs needed for the use of this model are:
- a Wöhler curve obtained on rectangular samples tested in 3-point bending;
- a numerical model of the mini-structure;
- a monotonic 3-point bending test for the validation of the numerical model.
With these data, it is then possible to estimate the fatigue life of the mini-structure in 3-point bending mode. Estimated fatigue lives of the mini-structure were very close to those obtained experimentally, indicating a good capacity of this model to predict the fatigue life of GFRPA66 mini-structures.
Fig. 6. Temperature variation at the end of a fatigue test.
Fig. 7. Acoustic event locations recorded during a fatigue test conducted on an open-hole sample.
Fig. 10. Experimental and estimated fatigue life of the GFRPA66 mini-structure with the force-based model.
Fig. 11. Experimental and estimated fatigue life of the GFRPA66 mini-structure with the stress-based model.
Table 2. Acoustic emission settings.
Table 3. Elastic coefficients of the equivalent UD ply.
Table 4. Ultimate local strength for plain material and open-hole samples calculated with the PSC method.
"Engineering",
"Materials Science"
] |
Introduction: Plasma Parameters and Simplest Models
Plasma is ionized gas (partially or fully). The overwhelming majority of matter in the universe is in the plasma state (stars, the Sun, etc.). The basic parameters of the plasma state are given briefly, as well as a classification of plasma types: classical-quantum, ideal-nonideal, etc. Differences between plasma and neutral gas are presented. Plasma properties are determined by long-distance electrostatic forces. If the spatial dimensions of a system of charged particles exceed the so-called Debye radius, the system may be considered as plasma, that is, a medium with qualitatively new properties. The expressions for the Debye radius for classical and quantum plasma are derived. Basic principles of plasma description are presented. It is shown that plasma is subject to specific electrostatic (or Langmuir) oscillations and instabilities. The simplest plasma models are given briefly: the model of a "test" particle and the model of two (electron and ion) fluids. As an example, the Buneman instability is presented along with a qualitative analysis of its complicated dispersion relation. Such analysis is typical in plasma theory and allows one to obtain the growth rate easily.
Introduction
Everyone knows the three states of matter: solids, liquids, and gases. Plasma is often called the fourth state of matter; bear in mind that with increasing temperature, the following transitions take place: solid-liquid-gas-plasma. Under the last transition, atoms lose electrons. Plasma consists (along with neutral atoms) of charged particles: electrons and positively charged ions (singly and/or multiply ionized). This definition of plasma is far from complete. A complete definition of plasma is, in fact, impossible: it would have to cover a very wide range of phenomena under a wide variety of conditions. Plasma is very common in the universe. Most of the substance in it (more than 99%) is in the plasma state. Media consisting of ionized atoms are found almost everywhere. The upper layers of the Earth's and stellar atmospheres, the interstellar medium, etc. are actually in the plasma state. Stellar plasma is another widespread example. In the plasma of stars, in particular the Sun, reactions of the synthesis of light elements, the so-called thermonuclear reactions, provide a huge release of energy and plasma heating. Currently, scientists from many countries around the world are studying the possibility of creating such a high-temperature plasma in terrestrial conditions, setting the task of implementing controlled thermonuclear fusion and providing humanity with an inexhaustible supply of energy.
There are two fundamentally different approaches to the implementation of controlled thermonuclear fusion. The first approach is to obtain the reactions in so-called magnetic traps. Hot plasma must not come into contact with the walls of the chamber, as this would lead to its actual destruction. The confinement of the plasma by a magnetic field should, in theory, prevent the contact of the plasma with the walls of the chamber, since the magnetic field bends the trajectories of charged particles. However, despite tremendous efforts, the task of plasma confinement has not been completely solved yet. Special configurations of the magnetic field, magnetic traps, help only partially. Plasma is an unstable medium in which small perturbations grow and destroy its given state. Instabilities are intrinsic to plasma. It turned out that any nonequilibrium initial distribution of particles is unstable. Below we show how instability follows from general electrodynamic considerations and give an example.
The second approach is the very quick heating of plasma up to thermonuclear temperatures. The reaction itself and energy removal also take place quickly. These processes recur very fast, and confinement of such plasma is not needed. This approach is called inertial fusion. It was proposed when very fast heating of plasma became possible by using intense laser beams and/or high-current relativistic electron beams.
The development of these studies is associated with the rebirth of the concept of plasma, which arose from investigations of gas-discharge processes. The processes in gas-discharge plasma have also been intensively studied. These studies were driven by the needs of classical and quantum electronics, for which gas-discharge appliances play an important role. Finally, solid-state plasma should be noted: the electron plasma of metals and the electron-hole plasma of semiconductors.
The listed series can be continued almost unlimitedly, speaking about plasma in magnetohydrodynamic and thermionic converters of thermal energy into electrical energy, about plasma in solutions of electrolytes, etc. However, the above examples are sufficient to make sure the extremely wide prevalence of plasma in nature and the importance of studying its properties.
A vast literature has grown to describe the plasma state (see, e.g., [1][2][3][4][5][6][7] and many others). Our further presentation is based on the principles of plasma electrodynamics, considering plasma as a continuous medium with a large number of free charged carriers.
Plasma parameters
Plasma is an ionized gas consisting of free electrons and various types of ions and neutrals. First of all, it is necessary to know the charge e_α and concentration n_α of the plasma components (here the index α takes values corresponding to the types of particles in the plasma: α = e for electrons, α = i₁, i₂, … for ions of various types, and α = n for neutrals). All plasma particles are in chaotic motion, but full thermodynamic equilibrium is absent. Usually each component has its own temperature T_α, which it is also necessary to know.
In solid-state and semiconductor plasma, the conception of temperature should be defined more carefully. If the Fermi energy of α-type particles,
$E_{F\alpha} = \hbar^2 (3\pi^2 n_\alpha)^{2/3} / (2 m_\alpha)$,   (1)
exceeds their thermal energy (here m_α is the mass of the particle and ℏ is the Planck constant), quantum effects should be taken into account. In this case the Maxwell distribution function does not describe the behavior of the charged particles; it should be described by the Fermi distribution function, and E_{Fα} (1) plays the role of the temperature. In this case the plasma is degenerate.
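As a quick numerical illustration (an addition, not part of the original chapter), the sketch below evaluates this Fermi energy and the degeneracy criterion E_F vs. T for two representative electron densities; the copper density is a textbook value taken as an assumption.

```python
# Quick check of the degeneracy criterion E_F vs. kT for electrons, using
# E_F = hbar^2 (3 pi^2 n)^(2/3) / (2 m) in Gaussian/CGS units.
# The copper density is a textbook value, assumed here for illustration.
import numpy as np

HBAR = 1.0546e-27   # erg*s
M_E = 9.109e-28     # g
K_B = 1.381e-16     # erg/K

def fermi_energy(n_cm3):
    """Electron Fermi energy in erg for density n (cm^-3)."""
    return HBAR**2 * (3.0 * np.pi**2 * n_cm3) ** (2.0 / 3.0) / (2.0 * M_E)

for name, n, T in [("copper electrons", 8.5e22, 300.0),
                   ("gas discharge", 1.0e12, 1.0e4)]:
    ef = fermi_energy(n)
    print(f"{name}: E_F = {ef / 1.602e-12:.2f} eV, "
          f"E_F/kT = {ef / (K_B * T):.2e} -> "
          f"{'degenerate' if ef > K_B * T else 'classical'}")
```

The copper case gives E_F of several eV, far above the room-temperature thermal energy, while the gas-discharge electrons remain classical.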
If neutrals are absent, plasma is fully ionized. In the opposite case, plasma is partially ionized, and it is necessary to know the level of plasma ionization. This is the ratio of neutrals' density to the density of charged particles (or to the full density of plasma).
An important characteristic peculiarity of the plasma state is the very wide range of values of these (and other) parameters. For example, plasma in some stars (white dwarfs) has a density of 10²⁵-10²⁶ cm⁻³, while in interstellar space plasma has a density of 1-10 cm⁻³. The ratio is about 10²⁶. The ratio of other parameter values is somewhat smaller. This leads to important consequences. In the example above, different approaches may be required to describe the plasma inside stars and in interstellar space. The most interesting cases will be mentioned below.
An important condition for plasma existence is its quasi-neutrality. The condition of quasi-neutrality has the form
$\sum_\alpha e_\alpha n_\alpha \approx 0$,   (2)
where the summation is made over all types of charged particles, α = e, i₁, i₂, i₃, … When it is violated, strong electric fields arise, which restore plasma quasi-neutrality. Violations of quasi-neutrality are possible only on spatial and temporal scales small in comparison with the characteristic plasma scales. The temporal characteristic scale of plasma is determined by its proper oscillations, while the spatial scale is determined by the length of plasma shielding (the Debye length; see below).
Langmuir frequency
Plasma, as a medium with a large number of free charged particles, is subject to oscillations. Consider in detail the oscillations of uniform electron plasma; the ions are heavy (immobile) and serve for charge neutralization. Let a small displacement of an electron layer relative to the ions take place (see Figure 1). We denote the displacement vector by X. The density of the uncompensated electron charge at the displacement X may be found from the continuity equation:
$\rho = e n_0 \,\nabla\cdot\mathbf{X}$.   (3)

Figure 1. Oscillations of the electron layer.

This charge creates an electric field E, the value of which may be determined from Poisson's equation:
$\nabla\cdot\mathbf{E} = 4\pi\rho$.   (4)
Hence, given (3), we can write
$\mathbf{E} = 4\pi e n_0 \mathbf{X}$.   (5)
Thus, the field E is parallel to the displacement of the electrons and acts on each electron with a force tending to return the electron to its original equilibrium position. As a result, we have the equation of motion of an electron in the form
$m\,\ddot{\mathbf{X}} = -e\mathbf{E} = -4\pi e^2 n_0 \mathbf{X}$.   (6)
This equation describes the oscillations of plasma electrons near the equilibrium position (X = 0) with the frequency
$\omega_{Le} = \sqrt{4\pi e^2 n_0/m}$,   (7)
which is known as the electron Langmuir frequency. If one uses MKS units, the expression for the electron Langmuir frequency is
$\omega_{Le} = \sqrt{e^2 n_0/(\varepsilon_0 m)}$,   (8)
where ε₀ is the dielectric permittivity of vacuum. Violations of plasma quasi-neutrality are possible only on a temporal scale small in comparison with the time τ ~ 1/ω_Le.
Gas parameter. Debye length
The behavior of an ionized gas is determined by long-distance electrostatic forces. These forces significantly influence the plasma behavior and, actually, determine its parameters. First of all it is necessary to find out under what conditions a system of electrostatically interacting particles can be considered as a gas. The main peculiarity of a gas is the following: its particles interact only during very small time intervals (during collisions); the rest of the time every particle moves independently of the others. At distances exceeding the size of the gas molecules, there is no interaction (its potential is equal to zero); in other words, the potential energy of a particle is much smaller than its kinetic energy. In this case, the ratio of the distance at which the interaction between the particles is significant to the average distance between particles is small [8]:
$\Lambda_G = a/\langle r\rangle \sim a\,n^{1/3} \ll 1$   (10)
(here a is the molecule size, ⟨r⟩ is the average distance between particles, and n is the density). The condition (10) also holds for the interaction of electrons with neutrals and of ions with neutrals. However, if we consider the long-distance interaction between charged particles, the gas-like parameter Λ_G requires rethinking; its physical meaning becomes slightly different. The gas approximation is valid if the energy of the interaction between particles U(⟨r⟩) is smaller than the average thermal energy T of the particles themselves, i.e.
$U(\langle r\rangle) \ll T$.   (11)
In other words, the following parameter, determining the plasma state, must be small:
$\Lambda_P = e^2 n^{1/3}/T \ll 1$.   (12)
The first condition (for a neutral gas) means that in a sphere with a radius equal to the radius of interaction there are few particles. The meaning of the similar condition for plasma is the opposite. To prove this we first determine the interaction radius in plasma, considering in detail the potential of a test particle in plasma. Let a particle with a charge q be placed at the point r = 0. We intend to find its potential φ from Poisson's equation. Assuming, for simplicity, that there is a single type of singly charged ions whose charge is not changed by the test particle (i.e., e_i = -e_e = e), and that |eφ| ≪ T_e, T_i, Poisson's equation takes the form
$\Delta\varphi - \varphi/r_D^2 = -4\pi q\,\delta(\mathbf{r})$,   (13)
where Δ is the Laplace operator and δ(r) is the Dirac function. Its solution is
$\varphi = (q/r)\,e^{-r/r_D}$,   (14)
where
$r_D = \left[4\pi e^2\left(n_e/T_e + n_i/T_i\right)\right]^{-1/2}$   (15)
is the so-called Debye radius. It shows the distance over which the Coulomb forces act in plasma. Outside of the Debye radius, the interaction between charged particles is exponentially small and may be neglected. Comparative characteristics of the two potentials are given in Figure 2: curve (a) presents the Debye potential, and curve (b) presents the vacuum potential ~1/r.
The electrostatic forces are, in fact, shielded. Now we can compare the average distance between charged particles with the Debye radius and make sure that the number of particles in the Debye sphere is large. For the simple case of a plasma with singly charged ions and T_e ≈ T_i ≈ T, we have
$n r_D^3 \sim \Lambda_P^{-3/2} \gg 1$.   (16)
This condition is essentially the opposite of the analogous condition for a gas (10). In a gas, the particles generally do not interact; the interaction takes place only during very short intervals, during collisions. In plasma, on the contrary, particles experience an interaction almost always. But, at the same time, the interaction is weak: it does not disturb their motion.
The Debye radius, in particular, for electrons is
$r_{De} = \sqrt{T_e/(4\pi e^2 n_e)}$.   (17)
For the quasi-neutrality of plasma, it is necessary that its characteristic dimensions L be much larger than the Debye radius, L ≫ r_D. Moreover, only under this condition can a system of charged particles be considered as plasma, i.e., a material medium with qualitatively new properties. Otherwise, it is a simple collection of individual charged particles, to which vacuum electrodynamics is applicable.
Degenerate plasma
It remains to determine the gas-like parameter for degenerate plasma, as well as to answer the question of whether Debye screening exists in quantum plasma. For this we first recall that the expression for the average energy of the particles, valid in both the classical and quantum cases, may be written in the following form:
$\langle E\rangle \simeq \max\{T, E_F\}$,   (18)
i.e., in the quantum case, the average energy of the state is equal to the Fermi energy E_F (see (1)). The gas-like parameter for degenerate plasma may be obtained by the replacement T → E_F in expression (12). It becomes
$\Lambda_G^{(D)} = e^2 n^{1/3}/E_F \ll 1$.   (19)
Now we show that in the quantum (degenerate) plasma of metals, shielding of the electrostatic field also takes place, and we derive the expression playing the role of the Debye radius in degenerate plasma. The energy of free electrons is p²/2m; in the presence of a field with the potential Φ(r), the Fermi particles become distributed in the spherical layer between $p_{min} = \sqrt{2me\Phi}$ and $p_{max} = \sqrt{2m(E_{Fe} + e\Phi)}$. Given this circumstance, one can find the expression for the electron density:
$n_e = n_{0e}\left(1 + e\Phi/E_{Fe}\right)^{3/2}$,   (20)
where n_{0e} is the density in the absence of a field (which coincides with the density of the neutralizing ion background). Now it is not difficult to write Poisson's equation for the potential of a test particle of charge q placed at the point r = 0:
$\Delta\Phi = 4\pi e\,(n_e - n_{0e}) - 4\pi q\,\delta(\mathbf{r})$.   (21)
The solution of this equation in the limit of weak fields, |eΦ| ≪ E_{Fe}, gives the shielded Coulomb potential
$\Phi = (q/r)\,e^{-r/r_{De}}$   (22)
with a Debye radius
$r_{De} = \sqrt{E_{Fe}/(6\pi e^2 n_{0e})}$.   (23)
The gas parameter for plasma (12) is similar to condition (10) for a neutral gas in the following sense: both of these conditions are fulfilled better for lower densities of plasma and neutral gas. The better the gas condition is satisfied, the more ideal the plasma is. For degenerate plasma (in which the particles need a quantum description), on the contrary, the gas condition depends on density inversely, i.e., with increasing density the ideality becomes better (see (19)). As E_F ~ n^{2/3}, it turns out that with increasing density the average energy of the Coulomb interaction increases more slowly; as a result, Λ_G^{(D)} ~ n^{-1/3}. So, the denser the degenerate metal component, the better the gas condition is fulfilled for it.
The diagram below presents the areas of the charge carriers' degeneracy and the areas of applicability of the gas approximation. The degeneracy condition for electron plasma has the form E_F > T (for E_F, see (1)). In the diagram of ln n vs. ln T, this condition gives line 1, dividing the region of degenerate plasma from the nondegenerate (classical) state. The condition for the applicability of the gas approximation in the nondegenerate state is Λ_P = e²n^{1/3}/T ≪ 1; in the same diagram, the condition Λ_P = 1 gives line 2. In the degenerate state, the condition for applicability of the gas approximation is Λ_P^{(D)} = e²n^{1/3}/E_F ≪ 1, in which E_F does not depend on T. In these conditions Λ_P^{(D)} = 1 gives line 3, passing through the point where lines 1 (E_F = T) and 2 (Λ_P = 1) intersect. Therefore, region I is the region of nondegenerate plasma with weak interaction (the gas approximation is applicable). Region II is the region in which the plasma is nondegenerate with strong interaction, i.e., a classical fluid. In region III, the plasma is degenerate with strong interaction, i.e., a quantum fluid. In both region II and region III, the gas approximation is not applicable. Finally, region IV of the parameter variations characterizes degenerate plasma with weak interaction (the gas approximation is applicable).
In conclusion, we give an estimate of the applicability conditions of the gas approximation (10) and (12) for various plasmas. First of all, we note that the size of atoms and molecules is of order a ~ 10⁻⁷-10⁻⁸ cm, so the condition for the gas approximation (10) is satisfied up to n < 10²¹-10²² cm⁻³, i.e., in gases at normal temperature up to a pressure of hundreds of atmospheres. It is obvious that in gas plasma, both in the ionosphere and in the laboratory, this condition is fulfilled perfectly, with a large margin.
A somewhat different situation holds for the condition of the gas approximation in plasma (12). In the ionospheric plasma, where n_e ~ 10⁶-10⁷ cm⁻³ and T_e ≈ 10⁴ K, we have Λ_P ≈ 10⁻⁴ ≪ 1, i.e., the condition is well satisfied. In ordinary gas-discharge fluorescent lamps, as well as in discharges used in laboratory experiments, where n_e ≈ 10¹⁰-10¹⁴ cm⁻³ and T_e ~ 10⁴-10⁵ K, the value of Λ_P ≪ 1. However, in discharges in dense gases used in light sources for laser pumping, as a rule, n_e ~ 10¹⁸-10¹⁹ cm⁻³ and T_e ~ 1-10 eV. Herewith Λ_P ≈ 0.1-0.5, which indicates a significant violation of the applicability condition of the gas approximation and a significant manifestation of the properties of non-ideal plasma or, as one says, liquid effects.
In thermonuclear plasma in facilities with magnetic confinement, n_e ≈ 10¹⁴-10¹⁵ cm⁻³ and T_e ≈ T_i ≈ 10⁸ K. As a result, we have Λ_P ≈ 10⁻⁵ ≪ 1, i.e., the ideality of the plasma is guaranteed. However, in inertial thermonuclear reactors, where experimenters strive to obtain plasma with n_e ~ n_i ~ 10²⁴-10²⁵ cm⁻³ at a temperature of T ≈ 10⁸ K, it turns out that Λ_P ≥ 0.01 and may be even larger. This, apparently, will require a consideration of weakly non-ideal plasma, especially in conditions of pollution (the presence of multiply charged ions).
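The following short script (an added illustration, in Gaussian units) reproduces the order of magnitude of these estimates, computing ω_Le, r_De, and Λ_P for the plasmas quoted above; the densities and temperatures are the rough values from the text.

```python
# Numerical illustration of the estimates above: electron Langmuir frequency,
# Debye radius, and gas (ideality) parameter Lambda_P = e^2 n^(1/3) / T,
# in Gaussian/CGS units. Densities/temperatures are the rough text values.
import numpy as np

E = 4.803e-10    # statcoulomb
M_E = 9.109e-28  # g
K_B = 1.381e-16  # erg/K

def omega_le(n):            # electron Langmuir frequency, rad/s
    return np.sqrt(4.0 * np.pi * E**2 * n / M_E)

def debye_radius(n, T_K):   # electron Debye radius, cm
    return np.sqrt(K_B * T_K / (4.0 * np.pi * E**2 * n))

def gas_parameter(n, T_K):  # Lambda_P = e^2 n^(1/3) / T
    return E**2 * n ** (1.0 / 3.0) / (K_B * T_K)

cases = [("ionosphere", 1.0e6, 1.0e4),
         ("laboratory discharge", 1.0e12, 5.0e4),
         ("magnetic-confinement fusion", 1.0e15, 1.0e8),
         ("inertial fusion", 1.0e25, 1.0e8)]

for name, n, T in cases:
    print(f"{name:30s} omega_Le = {omega_le(n):.2e} rad/s, "
          f"r_De = {debye_radius(n, T):.2e} cm, "
          f"Lambda_P = {gas_parameter(n, T):.1e}")
```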
Finally, a brief summary on the ideality of plasma in solids is presented. Even in good conductors, such as copper, where n_e ≈ 5·10²² cm⁻³ and E_F ~ 1 eV, we have Λ_P^{(D)} ~ 1, i.e., the plasma of metals is always non-ideal, and it is more correct to consider it as an electron liquid. Nevertheless, it turns out that the application of the gas approximation to metals leads to good results from the point of view of comparison with experiments. As for the electron-hole plasma of semiconductors, it is not degenerate at normal temperature, due to the small density of the carriers. For this reason, the condition of the gas approximation (19) is well satisfied. Exceptions can occur only at very low temperatures.
Self-consistent approach
The main feature of plasma and plasma-like media, such as gas plasma and the plasma of metals, semimetals, and semiconductors, is the presence of a large number of free charge carriers. Here we present the general principles of their description as continuous media. The term "plasma-like media" was first introduced in [6] (see also [7]), whose authors understood that such seemingly different states of matter as ionized gas (actual plasma), metals, semiconductors, and even molecular colloidal crystals and electrolytes may be described on the basis of similar principles: the principles of the electrodynamics of plasma-like media. In this section we briefly formulate these principles.
A self-consistent interaction of the electromagnetic field and the charge carriers takes place in plasma-like media. The field equations are Maxwell's equations, in which the current and the charge must be represented by sums over all carriers (charged particles) in the plasma:
$\nabla\times\mathbf{B} = \frac{4\pi}{c}\sum_\alpha e_\alpha n_\alpha \mathbf{v}_\alpha + \frac{1}{c}\frac{\partial\mathbf{E}}{\partial t}, \quad \nabla\times\mathbf{E} = -\frac{1}{c}\frac{\partial\mathbf{B}}{\partial t}, \quad \nabla\cdot\mathbf{E} = 4\pi\sum_\alpha e_\alpha n_\alpha, \quad \nabla\cdot\mathbf{B} = 0$,   (24)
where E is the electric field strength, B is the magnetic induction, and n_α, e_α, and v_α are the density, charge, and velocity of the α-th carrier, α = e, i₁, i₂, i₃, …
The equations for the fields are written in this form (i.e., in terms of E and B) because these quantities have direct physical meaning: they determine the Lorentz force F_n that acts on the n-th carrier of type α (it may be an electron or an ion of arbitrary type):
$\mathbf{F}_n = e_\alpha\left(\mathbf{E} + \frac{1}{c}\,\mathbf{v}_n\times\mathbf{B}\right)$.   (25)
According to the charge conservation law, the continuity equation must be satisfied for electrons and all types of ions, i.e., for α = e, i₁, i₂, …:
$\frac{\partial n_\alpha}{\partial t} + \nabla\cdot(n_\alpha\mathbf{v}_\alpha) = 0$.   (26)
Here and in the consideration below, we do not take into account the processes of ionization and recombination.
Eqs. (24)-(26) describe the self-consistent interaction between electromagnetic fields and a statistically large number of charged particles (plasma). According to this set, the electromagnetic fields determine the motion of the charged particles; in their turn, the same electromagnetic fields are induced by the moving plasma particles.
If the plasma (or plasma-like medium) is in external fields (electric E₀ and/or magnetic B₀), the equations must be written in a somewhat different form: the external fields should be singled out. If the external fields are excited by external current j₀ and charge ρ₀ densities, the set (24) should be written with these external sources added to the plasma current and charge densities (27). The external current j₀ and charge density ρ₀ do not depend on the processes in the plasma; their values, along with E₀ and B₀, satisfy Maxwell's equations. These fields also influence the motion of the plasma particles: the expression for the Lorentz force looks like (25), but here the fields E and B are induced by the external charge and current as well. An important conclusion follows from the sets (24) and (27): only one additional vector quantity appears in the field equations, the current in the plasma
$\mathbf{j} = \sum_\alpha e_\alpha n_\alpha \mathbf{v}_\alpha$   (29)
(the charge density may be expressed in terms of j by solving the continuity equation, $\partial\rho/\partial t + \nabla\cdot\mathbf{j} = 0$ (28)). The expression (29) shows that the current induced in the plasma depends on the velocities v_α, which are found independently (e.g., from equations of motion).¹ In the following, we will consider linear phenomena only. This implies a linear dependence j(E), which is true if the fields are comparatively small. It is worth noting that, in spite of our (and most other) linear consideration, nonlinear effects reveal themselves first of all in plasma and plasma-like media. Under linear consideration in isotropic media (media with no preferred directions), we actually have a proportional dependence of the plasma current on the electric field,
$\mathbf{j} = \sigma\mathbf{E}$.   (30)
This dependence represents Ohm's law for plasma, and σ is the plasma conductivity for the considered case of isotropic plasma. However, if the plasma is in an external field, it loses its isotropy, and the relationship between j and E becomes much more complicated.
Waves in plasma and plasma-like media
The most important solutions of the formulated set of equations are solutions in the form of traveling waves, i.e., solutions that depend on the space coordinate r and the time t as
$\propto \exp(-i\omega t + i\mathbf{k}\cdot\mathbf{r})$.   (31)
Solutions of this type are the simplest ones. In this case the initial equations may be essentially simplified: in (24) the derivatives may be replaced by multiplications,
$\partial/\partial t \to -i\omega, \qquad \nabla \to i\mathbf{k}$,   (32)
and the initial set reduces to a set of linear algebraic equations:
$i\mathbf{k}\times\mathbf{B} = \frac{4\pi}{c}\mathbf{j} - \frac{i\omega}{c}\mathbf{E}, \quad i\mathbf{k}\times\mathbf{E} = \frac{i\omega}{c}\mathbf{B}, \quad i\mathbf{k}\cdot\mathbf{E} = 4\pi\rho, \quad i\mathbf{k}\cdot\mathbf{B} = 0$.   (33)
(¹ The expression for the current j in plasma depends on the model chosen for the plasma description (see examples below). In the most complete kinetic consideration, the expression for the plasma current becomes $\mathbf{j} = \sum_\alpha e_\alpha \int \mathbf{v}\, f_\alpha(\mathbf{p},\mathbf{r},t)\, d^3p$, where f_α is the distribution function.) Any other, more complicated, solution of the initial equations in linear theory can be presented as a superposition of the simplest solutions with various amplitudes. As follows from the general principles of electrodynamics, this superposition is also a solution of the initial set. This emphasizes the importance of considering solutions in the form of traveling waves.
The motion and continuity equations may also be reduced to algebraic form. Here we present the reduced form of the continuity equation only, since the motion equations for the plasma particles depend on the model chosen for the plasma description (see below):
$-i\omega n'_\alpha + i\,n_{0\alpha}\,\mathbf{k}\cdot\mathbf{v}_\alpha = 0$.   (34)
So, the initial equations (consisting of Maxwell's, continuity, and motion equations) reduce to a linear algebraic set. The condition for the existence of nonzero solutions of this set is called the dispersion relation. It presents a certain relation between the frequency ω and the components of the wave vector k. This relation helps determine ω for a given k and, vice versa, to determine k (one of its components) if ω and the other components of k are given. These statements of the problem are called the initial and boundary problems, accordingly; both are widely used in plasma physics. In plasma many cases are encountered in which the solution of the initial problem gives a complex frequency ω = ω′ + iω″ for real k. In this case the real part Re ω = ω′ gives the frequency of the wave, while the imaginary part shows (depending on its sign) either the growth of the wave's amplitude if ω″ > 0 or its decrease if ω″ < 0, in accordance with
$\exp(-i\omega t + i\mathbf{k}\cdot\mathbf{r}) = \exp(\omega'' t)\,\exp(-i\omega' t + i\mathbf{k}\cdot\mathbf{r})$.   (35)
The solution of the boundary problem should be interpreted in the same manner: if the solution gives a complex component of the wave vector k, its imaginary part shows either amplification of the given wave in a given direction or its quenching.
Electrostatic waves in plasma
Plasma is a medium in which the propagation of specific electrostatic (or plasma) waves is possible. These waves have no oscillating magnetic field; they are also called space-charge or Langmuir waves. In these waves the electric field is parallel to the propagation direction, k ∥ E. Oscillations of the plasma particles are also parallel to the propagation direction, i.e., the waves are purely longitudinal. These waves play a most important role in plasma and influence its stability much more strongly than the usual electromagnetic waves (whose propagation in plasma is also possible).
The explicit expression for the plasma conductivity σ helps to obtain the dispersion relation for longitudinal waves. When we consider solutions of Maxwell's equations in the form of traveling waves, the conductivity σ depends on the frequency ω and the wave vector k, i.e., σ = σ(ω, k). The dispersion relation for electrostatic waves in plasma can be expressed in terms of σ(ω, k); it has the following form:
$\varepsilon(\omega,\mathbf{k}) \equiv 1 + \frac{4\pi i}{\omega}\,\sigma(\omega,\mathbf{k}) = 0$,   (36)
where ε(ω, k) is the well-known dielectric permittivity of the given medium. Propagation of usual electromagnetic waves in plasma is also possible; the dispersion relation for this case is
$k^2c^2 = \omega^2\,\varepsilon(\omega,\mathbf{k})$.   (37)
In the particular case of vacuum, this expression gives the propagation of usual vacuum electromagnetic waves: for vacuum ε(ω, k) ≡ 1, and the dispersion relation (37) takes the familiar look ω = kc.
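As an added illustration of relation (37), the sketch below uses the collisionless cold-plasma permittivity ε = 1 − ω_Le²/ω² (consistent with (42) below for ν → 0) to show that transverse waves propagate above the Langmuir frequency and are evanescent below it; the numbers are illustrative.

```python
# Sketch of the transverse-wave dispersion relation k^2 c^2 = omega^2 * eps,
# with the collisionless cold-plasma permittivity eps = 1 - (omega_Le/omega)^2.
# Below omega_Le the wavenumber is imaginary: the wave is evanescent (cutoff).
import numpy as np

C = 2.998e10  # speed of light, cm/s

def k_of_omega(omega, omega_le):
    """Complex wavenumber of a transverse EM wave in cold plasma (1/cm)."""
    eps = 1.0 - (omega_le / omega) ** 2
    return np.sqrt(eps.astype(complex)) * omega / C

omega_le = 5.64e7  # rad/s, roughly ionospheric for n_e ~ 1e6 cm^-3 (assumed)
omega = np.array([0.5, 0.9, 1.1, 2.0]) * omega_le
for w, k in zip(omega, k_of_omega(omega, omega_le)):
    kind = "propagating" if abs(k.imag) < 1e-30 else "evanescent"
    print(f"omega/omega_Le = {w / omega_le:.1f}: k = {k:.3e} 1/cm ({kind})")
```

This cutoff is exactly why radio waves below the local Langmuir frequency are reflected by the ionosphere.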
For further development of the properties of plasma-like media, it is necessary to specify the plasma models.
The simplest plasma models
Each model of plasma specifies how its particles interact with the electromagnetic field, as well as the behavior of the plasma particles within and between the plasma species. Here we consider the simplest models only, leaving aside the most rigorous kinetic consideration. Using examples of simple models, we show how the models work, as well as some of their advantages and disadvantages.
One-particle model
We begin with the model of one "average" (or test) particle. In this model particles interact via the electromagnetic field, and the interaction is, in fact, weak (it only weakly perturbs the motion of the particles). Collisions within and between species are also taken into account. This model describes the oscillatory properties of gas-discharge and ionospheric plasma well enough. In particular, the model was successfully used for the description of radio-frequency wave propagation through the ionosphere [9].
The initial set of equations in the model of the "average" particle includes Newton's equations for the "average" electron and the "average" ion, along with the equations for the electromagnetic field and the continuity equation:
$m\frac{d\mathbf{v}_e}{dt} = -e\left(\mathbf{E} + \frac{1}{c}\mathbf{v}_e\times\mathbf{B}\right) - m\nu_{en}\mathbf{v}_e - m\nu_{ei}(\mathbf{v}_e - \mathbf{v}_i)$,
$M\frac{d\mathbf{v}_i}{dt} = e\left(\mathbf{E} + \frac{1}{c}\mathbf{v}_i\times\mathbf{B}\right) - M\nu_{in}\mathbf{v}_i - M\nu_{ie}(\mathbf{v}_i - \mathbf{v}_e)$.   (38)
Here v_e and v_i are the velocities of the electrons and ions; m and M are their masses; ν_en, ν_ei and ν_in, ν_ie are the frequencies of their collisions, which determine the friction forces inhibiting their motion (ν_en is the frequency of collisions of electrons with neutral atoms (molecules) and ν_ei with ions, respectively; for ions, these are ν_in and ν_ie). According to Newton's third law, mν_ei = Mν_ie. A similar system of equations is also used to describe the dynamics of solid-state plasma, but in this case the meaning of the collision frequencies differs from the above: the frequencies are, actually, the inverse lifetimes of the electrons and holes, respectively.
First we show how this model works on the simplest example of plasma oscillations, and how easily the Langmuir frequency follows from it. Consider pure electron plasma; the ions are heavy and immobile and serve only for the neutralization of the electron charge. Upon deriving the equations describing the plasma oscillations, one should recall that we consider linear plasma phenomena. This means that the equations should be linearized, i.e., we consider small perturbations of the physical quantities from their basic (equilibrium) state. For example, the density of the electrons is written as n_e = n_{e0} + n'_e, where n'_e ≪ n_{e0}, and in the resulting expressions we retain the first-order terms only and neglect the terms of second-order smallness (products of first-order terms). We also take into account that the Langmuir oscillations are potential and use Poisson's equation instead of the full set of Maxwell's equations. For one more simplification, we consider the one-dimensional case: let the electrons oscillate along the z axis. All this reduces the initial equations (motion, Poisson's, and continuity) to the simple form
$m\frac{\partial v'_e}{\partial t} = -eE - m\nu_{ei}v'_e, \qquad \frac{\partial E}{\partial z} = -4\pi e\,n'_e, \qquad \frac{\partial n'_e}{\partial t} + n_{e0}\frac{\partial v'_e}{\partial z} = 0$.   (39)
Here v'_e is the perturbation of the electron velocity, e and m are the electron charge and mass, ν_ei is the frequency of electron-ion collisions, and t is the time.
For solutions of (39) that depend on z and t in the form exp(−iωt + ikz), we obtain a set of simple algebraic equations,
$-i\omega m v'_e = -eE - m\nu_{ei}v'_e, \qquad ikE = -4\pi e\,n'_e, \qquad -i\omega n'_e + ik\,n_{e0}v'_e = 0$,   (40)
from which the following expression for the plasma current results:
$j = -e n_{e0} v'_e = \frac{i e^2 n_{e0}}{m(\omega + i\nu_{ei})}\,E$.   (41)
From (41) one can easily obtain the corresponding expression for the dielectric permittivity, as well as the dispersion relation, which is
$\varepsilon(\omega) = 1 - \frac{\omega_{Le}^2}{\omega(\omega + i\nu_{ei})} = 0$.   (42)
If ν_ei ≪ ω (realized in most cases), the relation (42) leads to ω = ±ω_{Le}, i.e., we have free plasma oscillations. If one takes plasma collisions into account, one obtains a small negative imaginary correction to the frequency: ω_{Le} → ω_{Le} − iν_{ei}/2. This shows the decay of the oscillations; the decay takes place as a result of collisions.
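A quick numerical check of (42) (an added illustration): setting ε = 0 gives the quadratic ω² + iν_ei ω − ω_Le² = 0, whose exact roots can be compared with the weak-damping approximation ±ω_Le − iν_ei/2.

```python
# Numerical check of dispersion relation (42): eps = 0 is equivalent to
#   omega^2 + i*nu*omega - omega_Le^2 = 0.
# For nu << omega_Le the roots should approach +/- omega_Le - i*nu/2.
import numpy as np

omega_le = 1.0   # frequencies measured in units of omega_Le
nu = 0.05        # weak collisions, nu << omega_Le

roots = np.roots([1.0, 1j * nu, -omega_le**2])
print("exact roots:        ", roots)
print("weak-damping approx:", np.array([omega_le, -omega_le]) - 1j * nu / 2)
```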
The results obtained for plasma oscillations and their decay agree with experimental data. In fact, the "average"-particle model describes plasma well in the considered range of frequencies. It was this model that Langmuir used to describe the oscillatory properties of gas-discharge plasma, and the model was also used successfully to describe the propagation of radio-frequency waves through the ionosphere [9]. The "average"-particle model is thereby justified for the high-frequency range. However, in the opposite limit of low frequencies, this model does not lead to reasonable results. That is why new, more complicated models have been developed.
Two-fluid hydrodynamics: relative electron-ion motion
The idea of treating plasma as a system consisting of electron and ion fluids arose long ago. In this model the plasma species are described by hydrodynamic equations and interact through the electromagnetic field and through collisions. The interaction leads to various effects; in particular, instability can follow from it. The general theory of plasma instabilities shows that instability is a result of a thermodynamically nonequilibrium initial distribution of the plasma components.
The initial equations describing the electron and ion fluids and their interaction are somewhat more complex than in the previous case of the "average" particle. They contain additional terms that follow from classical hydrodynamics:

$$m n_e\left(\frac{\partial \mathbf{v}_e}{\partial t} + (\mathbf{v}_e\cdot\nabla)\mathbf{v}_e\right) = -e n_e\left(\mathbf{E} + \frac{1}{c}\mathbf{v}_e\times\mathbf{B}\right) - \nabla(n_e T_e) - m n_e\nu_{en}\mathbf{v}_e - m n_e\nu_{ei}(\mathbf{v}_e - \mathbf{v}_i),$$
$$M n_i\left(\frac{\partial \mathbf{v}_i}{\partial t} + (\mathbf{v}_i\cdot\nabla)\mathbf{v}_i\right) = e n_i\left(\mathbf{E} + \frac{1}{c}\mathbf{v}_i\times\mathbf{B}\right) - \nabla(n_i T_i) - M n_i\nu_{in}\mathbf{v}_i - M n_i\nu_{ie}(\mathbf{v}_i - \mathbf{v}_e). \tag{43}$$

Here $m$ and $M$ are the electron and ion masses, and $T_e$, $T_i$ and $n_e$, $n_i$ are their temperatures and densities. The other notations are the same as in the "average"-particle case above.
The equations (43) describing the electron and ion fluid motion should be supplemented by Maxwell's equations and the continuity equations. Equations for $T_e$ and $T_i$ (energy balance, or heat, equations) are also needed. The specific forms of these equations depend on the particular problem under consideration. For simplicity, one may assume $T_e, T_i = \mathrm{const}$; this assumption greatly simplifies further analysis.
Here we briefly consider a simple example of the two-fluid model. In order to show the role of ions and how this role can lead to instability, we consider a case in which the electron fluid moves relative to ions at rest. Let u be the constant velocity of the moving electrons; neutrals are absent. After linearization, the initial equations (43) of the electron and ion fluids together with Maxwell's and the continuity equations are reduced to the following set (we assume $T_e, T_i = 0$, consider potential oscillations in a one-dimensional system, and choose the z axis along u):

$$m\left(\frac{\partial v'_e}{\partial t} + u\frac{\partial v'_e}{\partial z}\right) = -eE - m\nu_{ei}v'_e, \qquad M\frac{\partial v'_i}{\partial t} = eE,$$
$$\frac{\partial n'_e}{\partial t} + u\frac{\partial n'_e}{\partial z} + n_{0e}\frac{\partial v'_e}{\partial z} = 0, \qquad \frac{\partial n'_i}{\partial t} + n_{0i}\frac{\partial v'_i}{\partial z} = 0, \qquad \frac{\partial E}{\partial z} = 4\pi e(n'_i - n'_e). \tag{44}$$

Here $v'_e$ and $v'_i$ are the perturbations of the electron and ion fluid velocities, $n'_e$ and $n'_i$ are the perturbations of their densities, $n_{0e}$ and $n_{0i}$ are their unperturbed densities, $\nu_{ei}$ is the frequency of electron-ion collisions, and E is the electric field. We look for solutions of the set (44) in the form of waves propagating along the z axis, $\sim \exp(-i\omega t + ikz)$. In this case equations (44) reduce to an algebraic set. Performing the further steps (determination of the induced plasma current, finding the plasma conductivity and dielectric permittivity) by analogy with the previous case, one arrives at the expression for the dielectric permittivity of the considered system, which in this case consists of three terms:

$$\varepsilon(\omega, k) = 1 - \frac{\omega_{pe}^2}{(\omega - ku)(\omega - ku + i\nu_{ei})} - \frac{\omega_{pi}^2}{\omega^2}.$$
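As a hedged illustration (not from the original text), the collisionless limit of this permittivity can be cleared of denominators to give a quartic in $\omega$ and solved numerically; a root with positive imaginary part signals the electron-ion (Buneman-type) instability discussed above. All parameter values below are assumptions.

```python
import numpy as np

# Collisionless limit: 1 - wpe^2/(w - k*u)^2 - wpi^2/w^2 = 0.
# Multiplying by w^2*(w - k*u)^2 gives a quartic polynomial in w.
wpe, u, k = 1.0, 1.0, 1.0          # normalized, illustrative values
wpi = wpe / np.sqrt(1836.0)        # hydrogen electron/ion mass ratio
a = k * u

# w^2*(w-a)^2 - wpe^2*w^2 - wpi^2*(w-a)^2 = 0, expanded:
coeffs = [1.0, -2*a, a**2 - wpe**2 - wpi**2, 2*a*wpi**2, -wpi**2 * a**2]
roots = np.roots(coeffs)
print(roots)
print("max growth rate:", roots.imag.max())  # > 0 indicates instability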
"Physics"
] |
Heavy commercial vehicle yaw control simulation
The aim of this article is to present a universal multibody dynamic model of a heavy commercial vehicle equipped with a direct yaw moment control system. The presented simulation method is based on the interconnection of the multibody software ADAMS and the graphical programming environment MATLAB Simulink. The main task is to demonstrate the potential effects of direct yaw moment control using an active differential on a heavy commercial vehicle with rear-wheel drive.
Introduction
The technology of yaw moment control systems has been in common use since the 1990s and has become standard equipment of passenger vehicles on the market. These systems are usually brake-based electronic stability programs (ESP), which stabilize the vehicle in limit situations. However, brake-based systems have the main disadvantage that they reduce the speed performance of the vehicle. With the increasing performance of modern vehicles, manufacturers need to ensure vehicle stability and controllability also during fast maneuvers. Mainly for this reason, powertrain torque-management technologies have also been increasingly developed during the last two decades. Besides front-rear torque-control couplers such as BMW xDrive or the Haldex coupling, active right-left torque-control systems such as Honda SH-AWD or Mitsubishi AYC have been investigated [1]. However, these technologies have been developed for passenger cars only, and their application to heavy commercial vehicles is missing. From the viewpoint of heavy commercial vehicles, it is obvious that manufacturers primarily target system reliability and operational efficiency, so the development of new technologies is very conservative. Nevertheless, right-left active yaw control technology could also be significantly advantageous for heavy commercial vehicles. An example of such a dynamic state is demonstrated on the multibody heavy-commercial-vehicle model later in this article.
The multibody method of mechanism simulation is a commonly used tool for the analysis of complex mechanisms and is suitable for various types of tasks, from handling robots to agricultural machinery [2]. Several commercial multibody programs have specialized packages for simulations of complete vehicles and vehicle subsystems. One of the world's most widely used solutions linking multibody systems with vehicle dynamics theory is the MSC ADAMS Car software. This article describes a method of interconnecting the ADAMS Car model with an active yaw moment control algorithm in MATLAB Simulink for the analysis of the effect of right-left torque control on vehicle dynamic states.
Active yaw moment control
Active yaw moment control with an active differential creates a longitudinal force difference ΔF_x between the left and right wheels and can thereby directly control the yaw moment M_z acting on the vehicle (Fig. 1). This means that the tire longitudinal forces are controlled, and the vehicle's cornering performance is thus enhanced. The advantage of these systems is that they are not based on the vehicle brakes, so they do not conflict with the driver's acceleration and braking demands. Equally, there is nearly no change in the total driving or braking force of the two wheels. For this purpose, many active-differential mechanisms have been developed, based on additional gears between the differential cage and the output shafts. For smooth control of the torque difference, two friction or electromagnetic clutches are usually used.
The additional yaw moment from the longitudinal force difference is defined as follows:

$$M_z = \Delta F_x \cdot \frac{t_r}{2},$$

where $t_r$ is the track width of the drive axle.
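As a quick numeric illustration (the values are assumptions, not taken from the paper), a modest force difference already produces a sizable corrective moment:

```python
# Illustrative values: 2.0 m rear track width, 6 kN longitudinal
# force difference between left and right wheels.
track_width = 2.0      # m
delta_fx = 6000.0      # N

yaw_moment = delta_fx * track_width / 2.0
print(yaw_moment)      # 6000.0 N*m of corrective yaw moment
```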
Dynamic model
The complete simulation is based on an ADAMS and Simulink co-simulation technique and therefore consists of two main parts. The first part is the complete vehicle multibody model built in MSC ADAMS Car, and the second part is the attached active yaw moment control algorithm built in MATLAB Simulink. The ADAMS and Simulink cooperation method is suitable for mechatronic system development, where the control algorithm regulates a complex mechanical structure such as a robot, manipulator, or car.
Vehicle model
The simulated commercial vehicle is a two-axle off-road tipper with permanent rear-wheel drive. The chassis consists of a rigid "backbone" tube with independent swinging half-axles. The front suspension is formed by two air springs and two dampers; the rear suspension is equipped with two air springs with additional coil springs and an anti-roll bar. The frame with the "backbone" tube, cabin, tipper body, and accessories (bumper, fuel tanks, etc.) is modeled as one rigid part to which the load-representing part is fixed. The powertrain subsystem is represented by an engine block part and driveline shafts. Between the engine block part and the rear-axle input shaft acts a torque whose value is calculated from the throttle and clutch positions, the engine torque curve, and the actual gear ratio. Between the output shafts of the classic differential, an additional torque acts to simulate the active right-left torque distribution. This torque represents the active differential, which directly influences the tire longitudinal force difference ΔF_x, and it is managed by the control algorithm in Simulink. The rear drive axle is equipped with dual wheels. An appropriate tire model plays an important role in the multibody simulation of the complete vehicle, so the PAC2002 tire model, which is suitable for vehicle handling simulation on an even road, is used. This tire model was developed on the basis of the Magic Formula 6.2 tire model but was adapted by the MSC software company for ADAMS Car simulations. The tire model is simulated in a mode including combined force-moment calculation with relaxation behavior.
For the handling simulations it is necessary to connect the dynamic model with a so-called vehicle test rig. In this case the ADAMS MDI-SDI Testrig, which contains a driver algorithm and predefined vehicle tests, is used.
Control algorithm
The used control algorithm of the active yaw control system has the hierarchical structure shown in Fig. 3. The sensor inputs for the upper controller, which calculates the desired yaw torque, are the wheel speeds, lateral acceleration, yaw rate, and steering angle. The objective of the upper controller is to maintain the yaw stability of the vehicle. The aim of the lower controller is to estimate the right-left torque difference that produces the desired yaw torque so as to track the target yaw rate. The desired yaw rate for the yaw torque estimation is calculated as follows [4]:

$$\dot{\Psi}_d = \frac{v_x\,\delta}{L + K_V v_x^2}, \qquad K_V = \frac{m\,(l_r C_r - l_f C_f)}{2\,C_f C_r\,L},$$

where $\dot{\Psi}_d$ is the desired yaw rate, $v_x$ the vehicle longitudinal velocity, $l_f$ and $l_r$ the distances of the vehicle's center of gravity from the front and rear axles, $m$ the vehicle mass, $C_f$ and $C_r$ the cornering stiffness of each front and rear tire, $L$ the wheelbase of the vehicle, and $\delta$ the steer angle.
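The steady-state relation above translates directly into code. This is a minimal sketch under assumed, truck-like parameter values, not the authors' Simulink implementation:

```python
def desired_yaw_rate(vx, delta, m, lf, lr, cf, cr):
    """Target yaw rate of the linear bicycle model (rad/s).

    vx: longitudinal velocity [m/s], delta: steer angle [rad],
    m: vehicle mass [kg], lf/lr: CoG distances to the axles [m],
    cf/cr: per-tire cornering stiffness [N/rad].
    """
    L = lf + lr                                          # wheelbase
    kv = m * (lr * cr - lf * cf) / (2.0 * cf * cr * L)   # stability factor
    return vx * delta / (L + kv * vx**2)

# Assumed parameters: 50 km/h, 8 deg front-wheel steer angle.
print(desired_yaw_rate(vx=13.9, delta=0.14, m=15000.0,
                       lf=2.4, lr=1.6, cf=2.0e5, cr=4.0e5))
```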
Active yaw control strategies usually use a feedback controller to compute the reference yaw moment from the difference between the measured and the desired yaw rate. These feedback controllers usually utilize sliding-mode, linear-quadratic, model-predictive, or robust control strategies [3]. In addition, the upper controller can be augmented by a feedforward map; the feedback controller then compensates for inaccuracies, disturbances, or variations of the vehicle parameters. The presented control algorithm is based on a feedback sliding-mode controller without a feedforward loop.
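A minimal sliding-mode sketch of such an upper/lower controller pair is given below. It is an assumption-laden illustration: the gain K, the boundary layer phi, and the saturation form are not taken from the paper.

```python
def smc_yaw_moment(r_meas, r_des, K=5.0e4, phi=0.05):
    """Upper controller: corrective yaw moment from the yaw-rate error.

    A saturated sliding surface s = r_meas - r_des replaces the pure
    sign function to limit chattering inside the boundary layer phi.
    """
    s = r_meas - r_des
    sat = max(-1.0, min(1.0, s / phi))
    return -K * sat

def torque_difference(yaw_moment, track_width=2.0):
    """Lower controller: left-right force difference producing M_z."""
    return 2.0 * yaw_moment / track_width
```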
To describe the vehicle dynamic state sufficiently, an estimate of the vehicle side slip angle is necessary in addition to the yaw rate. Otherwise, a vehicle drift with small yaw rate can occur without the control algorithm detecting the instability. For this reason, active yaw control strategies always include a vehicle side-slip-angle estimator to set the proper controller boundaries.
Simulations
The presented universal dynamic vehicle model with the control algorithm is able to simulate many different maneuvers and vehicle dynamic states. These simulations allow a detailed analysis of the vehicle handling performance under various external conditions. The results of the models with and without active torque distribution can be compared, and the potential effects of using active yaw moment control on a heavy commercial vehicle can be estimated.
Vehicle stabilization
The main task of the proposed system is to stabilize the vehicle in limit situations and keep it controllable for the driver. Due to an appropriate drive torque distribution, the limits of the total lateral tire forces acting on the drive axle can be raised, which helps maintain the vehicle's handling stability [5]. To demonstrate the enhanced handling performance, a single lane change maneuver on a road with a lower friction coefficient was chosen. The friction coefficient was set to 0.5, which corresponds to driving on a wet asphalt or concrete road. The initial velocity was 50 km/h, and the steering wheel angle amplitude was set to 200 degrees, which corresponds to a front-wheel steer angle of about 8 degrees. The steering wheel angle during the simulation is shown in Fig. 4. Both variants, with and without the active differential, were simulated.
Regulation of understeering
The second simulation was focused on the potential compensation of vehicle understeering through the active differential. Heavy commercial vehicles are expected to operate reliably over a wide range of loading, but the transported cargo changes the understeering behavior in many cases. Therefore, it would be beneficial for vehicle handling to regulate the understeering or oversteering behavior through the active differential. To demonstrate this effect on the simulated vehicle, a steady-state constant-radius cornering maneuver with a radius of 50 m was chosen. The vehicle accelerated for 20 seconds from an initial velocity of 20 km/h.
Results summary
The first maneuver represents the transient behavior of the vehicle at the handling stability limits. Due to the active right-left distribution of the drive torque, the lateral force limit of the rear axle tires is raised, and the vehicle remains controllable during the whole maneuver. Fig. 5 compares the resultant vehicle side slip angle during the simulation. The red curve is the side slip angle of the vehicle without the active differential, and the blue curve is that of the vehicle with active drive torque distribution. It is obvious that the vehicle without the proposed system exceeded the side slip angle limits and became unstable in the sixth second of the simulation. The blue curve shows that the active differential is able to regulate the side slip angle effectively during the transient dynamic state and keep the vehicle within the limits. The second maneuver represents steady-state vehicle dynamic behavior and demonstrates the possibility of regulating the vehicle's understeering. The comparison of the vehicle with and without the active right-left torque distribution system is shown in Fig. 6. This graph shows the dependence of the steer angle on the vehicle's lateral acceleration. The red curve is the steer angle of the vehicle without active yaw moment control; it is obvious that with increasing lateral acceleration the driver has to increase the steering wheel angle as well, to compensate for the understeering. The blue curve shows that for the vehicle with the active differential, the tire longitudinal force difference on the drive axle is able to keep the vehicle on the constant-radius trajectory even at higher lateral acceleration.
Conclusions
The presented method of mechatronic system development shows the potential of active yaw moment control for a heavy commercial vehicle. Thanks to the multibody model of the complete vehicle, it is possible to predict the dynamic behavior of the complex system without building a very expensive real prototype. Moreover, it is possible to verify the function of the sophisticated control algorithm and to analyze the effects of the active differential. The simulation results show a significant improvement of the vehicle handling performance in response to the driver's commands during steady-state cornering and in transient dynamic states when the limit is reached. This improvement can play an important role in vehicle maneuverability and active safety.
Fig. 2. Graphical representation of the dynamic vehicle model
Fig. 5. Vehicle side slip angle comparison during single lane change maneuver on wet road
"Engineering"
] |
Experience Control Analysis of English Reading Software Based on Wireless Binocular Line-of-Sight Sensing
This paper proposes a segmented combined English text measurement method based on two sets of orthogonal linear image sensors and one area image sensor. This method fully combines the advantages of the linear image sensor and the area image sensor in long-distance and short-distance English text measurement and can continuously perform high-precision English text tracking within a large range of viewing distance. Based on this method, a segmented English text measurement system is designed and constructed. This paper presents a method for extracting English word boundaries based on semantic segmentation to solve the problem of global positioning and horizontal initialization of English reading text. The semantic segmentation method based on fully convolutional networks (FCN) is analyzed, and the target classification is defined. We used the classic FCN framework and model, fine-tuned with manually annotated data, and achieved good segmentation results. For the definition and extraction of English word boundaries in English text, a piecewise linear model is used to measure the projection confidence of each English word boundary point, and the overall observation of the English word boundary is measured. When the observation confidence is high enough, combined with the English word boundaries marked in the high-precision image, the horizontal positioning is obtained by weighted matching. This paper concludes that English reading software can help learners in English learning to a certain extent, which shows that English reading software is an effective supplement to blended learning classrooms. Through the analysis of learners and teaching content, an English teaching model based on blended learning with English reading software is designed. Experimental studies have shown that English reading software can help learners learn English, which not only expands their vocabulary but also broadens their horizons.
Introduction
The camera is a type of computer peripheral that is currently widely used. It uses an ordinary lens and a CMOS/CCD sensor to obtain an image of the subject. After special DSP processing and encoding into MPEG or another standard audio-video format, and using network interactive software, people can share image resources on the local machine or with communicators on the Internet [1,2]. However, existing cameras use a single lens and a CMOS/CCD sensor, which can only obtain two-dimensional images of the subject without depth information; the communicator cannot see the subject with stereo vision, which makes the comprehensive acquisition of the subject's three-dimensional data difficult. The communication therefore lacks a sense of real presence, and data loss is serious after the digital processing and restoration of the real environment [3]. In recent years, more and more people have devoted themselves to research on computer stereo vision technology [4]. How to obtain the image of the photographed object is the basis of the entire stereo vision image acquisition system. 3D vision generally uses binocular images that simulate human eyes, but there are also many multi-camera setups that use more than two cameras. The acquisition of binocular images is usually done by two cameras with a certain displacement on the same horizontal line.
English is the common language of the world today. Most countries in the world use English in diplomatic activities, government documents, official meetings, etc.; 72% of emails are written in English, and 70% of the world's publications are written in English [5]. It seems that English has become the world's most common language. As a compulsory course from basic education to higher education in our country, English plays an important role in people's study, work, and life. With the popularization of mobile devices and the improvement of their functions, various mobile learning platforms are gradually being promoted, and more and more people use mobile devices for mobile learning [6]. Mobile technology provides good support for text, pictures, audio, video, animation, files, and other elements, and it is quickly being applied to the field of English learning. Many English-related mobile learning platforms have begun to appear, such as handheld listening, speaking, and sailing English learning apps. The English course advocates the English teaching concept of quality education, provides healthy and active teaching materials, and lays a good English foundation for students. It highlights the central status of learners and gives full play to the guiding role of teachers [7]. Good use of modern educational technology and vigorous encouragement of learners to learn English through remote or online platforms can improve efficiency.
In order to address the problem of large variations in viewing distance in English text measurement for visual navigation applications, this paper proposes a segmented combined English text measurement method and gives a system implementation plan. Specifically, the technical contributions of this article can be summarized as follows: First, we divide the measurement task into a far segment and a near segment according to the measurement distance. The corresponding system includes a far-segment measurement module composed of two sets of orthogonal linear image sensors, a near-segment measurement module composed of an area array image sensor, and a cooperative target light source composed of near-infrared LEDs. This paper proposes a method for the global positioning and initialization of English reading text based on semantic segmentation. Word-level horizontal positioning is obtained, which provides a horizontal initial value for the global positioning of subsequent English reading text.
Second, this article matches English word boundaries with images. We propose a pixel-level semantic segmentation method based on deep learning network. A semantic segmentation map is generated, and then, dynamic and static targets are analyzed to obtain English word boundaries.
Third, according to the plane assumption, the English word boundary in the image coordinate system is projected to the coordinate system, and it is matched with the English word boundary in the high-precision image to obtain the word-level positioning result. At the same time, the surrounding dynamic target interference is evaluated, which improves the robustness to dynamic target interference.
Fourth, it is proved from a quantitative perspective that English reading software has a positive impact on the improvement of learners' language proficiency. The experimental results show that the use of English reading software in the teaching of blended English courses can help learners improve their English reading performance. Based on the analysis of the questionnaire survey data and interviews, this article concludes that learners agree with English reading software. It can be seen that the English reading software environment is a further extension of classroom learning.
It not only changes the teaching mode of teachers but also makes great changes to the learning mode of students.
Related Work
Related scholars have used Web technology to develop an adaptive English learning system for college students to promote the independent and personalized learning of English learners [8]. Researchers have developed an English learning system with students as the main body, providing learning materials, learning tools, learning evaluation, interactive communication, and other functional modules [9]. Learners can choose relevant learning materials based on their own knowledge, while teachers can enter the background to view students' learning situation and share teaching results. Relevant scholars have developed a web-based English teaching platform aimed at students of higher vocational colleges [10]. The platform provides functions such as grade exams, supplementary teaching topics, online self-tests, special topics, and interactive platforms to improve students' interest in English learning. Related scholars have developed a mobile listening learning system for a specific user group, college students, providing functions such as vocabulary, reading, and testing, and improving students' listening skills through effective training [11]. Taking college students as the research object, with vocabulary learning as the main content, a mobile English learning platform based on the Android system has been developed in order to create a good learning environment and improve the learning efficiency of English learners.
Research on basic English learning through E-Learning systems has become a hot issue for E-Learning scholars abroad [12]. Scholars at the School of Information of Kyoto University in Japan use an E-Learning system to intelligently judge whether the pronunciation of elementary and middle school English learners is correct. Scholars from the Language Learning and Nature Laboratory of Simon Fraser University in Canada use an E-Learning system to give elementary and middle school students timely feedback when learning English [13]. The feedback given by the system includes word spelling check, grammar check, and pronunciation check. In addition to the word check, the scholars conducted a survey of learners who used this system, and 79% of the students were satisfied with its effect [14]. Scholars from Ho Chi Minh City University of Science and Technology in Vietnam jointly used an E-Learning system to conduct research on the writing evaluation of junior high school English learners [15]. They used an algorithm that lets computers help teachers evaluate students' compositions, reducing teachers' workload, and experiments proved that the accuracy of the evaluation results reached 87%, which improved the efficiency of junior middle school English teachers [16].
The scholars of the Autonomous University of Barcelona, Spain, through the online learning environment created by the E-Learning system, selected 78 primary school students from Spanish and British primary schools for one-on-one mutual-aid language learning [17]. Scholars have proved through experiments that with the help of the online learning environment, the language learning level of mutual-aid learners has been greatly improved [18]. Relevant scholars selected abstract words from the primary and secondary school English learning stage as E-Learning content, conducted network learning through the E-Learning system, and ran comparative experiments [19]. The results prove that primary school students who learn abstract English words through the E-Learning system achieve better academic performance than students who learn through classroom explanations. It can be seen that by designing an E-Learning English learning system that meets the needs of students and teachers, English learners can improve their English listening, reading, oral dialogue, and writing skills, and English teachers can improve their teaching ability and teaching philosophy [20]. Relevant scholars have used Internet technology to build a hybrid education and teaching platform based on cloud computing technology, motivated by the currently low level of informatization in the education industry and the chaotic management of teaching resources [21]. The platform effectively integrates education and teaching resources, promotes the process of education informatization, and improves the effect of education and teaching. Relevant scholars have put forward a hybrid learning and teaching management platform that combines a network teaching management system and a teaching resource platform, based on an analysis of the current development of Internet technology and the education industry [22]. This optimizes the existing teaching methods and improves the efficiency of school teaching resource management. Researchers have investigated the factors affecting learning satisfaction in a blended learning environment and found that students' learning atmosphere, learning motivation, and interactive behavior are three of the factors that affect students' learning satisfaction [23]. By designing and implementing a hybrid learning hierarchy model based on learning satisfaction, they effectively improved students' learning outcomes and provided reference guidance for subsequent learning satisfaction model research.
Segmented Visual English Text Measurement Design
Overall Technical Solution.
In order to meet the requirements of resolution, distance range, measurement speed, and reliability in actual measurement, this paper proposes a segmented combined English text measurement method and a system implementation plan. This method divides the measurement task into a far segment and a near segment. Two sets of orthogonal linear image sensors form the far-segment measurement module, and an area array image sensor forms the near-segment measurement module. A near-infrared LED light source is used as the active marker on the target platform. The light source adopts a layered design and consists of an outer ring and an inner ring. The outer ring light source is composed of 4 high-power LEDs, while two medium-power LEDs form the inner ring light source to provide a cooperation sign for the near-segment measurement module. The system composition is shown in Figure 1.
When the distance between the English text measurement system and the target platform is less than 5 meters, the target exceeds the measurement field of view of the far-segment orthogonal linear image measurement module and enters the "near-segment" measurement range. In this range, high measurement accuracy of the English text is required. At this time, the near-segment monocular image measurement module is used to accurately measure the target's English text, and the inner ring light source composed of 4 medium-power LEDs is used to provide a cooperation sign for the measurement module.
The Design of Remote Orthogonal Linear Array Image Measurement
In order to accurately control the image acquisition time of the four linear CCD cameras in the two camera groups in dynamic coordinate measurement, and to process the acquired images in parallel, a special image acquisition and processing circuit is designed for each linear CCD camera. The design of the circuit requires the ability to collect and process linear CCD signals in real time and to detect and output the pixel coordinates of the LED light source. This circuit is composed of three parts: the linear CCD drive circuit, the image buffer circuit, and the image processing circuit.
The linear array CCD drive circuit mainly includes the linear array CCD image sensor, CPLD, amplifying circuit, ADC, and tri-state buffer chip. The main function of this module is to provide the drive signals required for the normal operation of the linear array CCD, to amplify and convert the image signal output by the linear array CCD, and to provide the timing signals needed for the subsequent image buffering.
The linear CCD image sensor is the core component of the vision measurement system. Its selection needs to consider key characteristic parameters such as sensitivity, dynamic range, resolution, frame rate, and spectral response. According to the task requirements of the English text measurement system in this subject, the TCD142D CCD chip from Toshiba was selected, with a resolution of 2048 × 1 pixels, a pixel size of 14 μm × 14 μm, a data rate of up to 10 MHz, and a good spectral response near the wavelength of 740 nm. It has the characteristics of high sensitivity and low dark current, which can meet the needs of the system.
Circuit Design of Text Image Processing in English Reading Software
The workflow of the text image acquisition process of the English reading software is shown in Figure 2. When the DSP receives the interrupt signal from the communication link, it immediately starts the FPGA image acquisition through an I/O level. Since the communication link triggers synchronous acquisition by interrupt, the image acquisition responds in a timely manner. When the remote measurement module is working and the LED light source adopts stroboscopic modulation, the modulation frequency is set to 1/5 of the frame rate, so 6 frames of images constitute one stroboscopic cycle. The DSP processes 6 frames of images as a unit, and the position of the target light source on the image can be obtained through background difference and demodulation. For image access, the idea of ping-pong operation is adopted, as sketched below: the FPGA writes images to two RAM memory blocks in a loop, and the DSP reads images from the free memory block.
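The following is a hedged Python sketch of that ping-pong scheme; the buffer layout and names are assumptions for illustration, not the paper's FPGA/DSP firmware:

```python
# Producer (FPGA role) writes one strobe cycle of frames into one
# buffer while the consumer (DSP role) reads the other buffer.
FRAMES_PER_CYCLE = 6          # one stroboscopic cycle = 6 frames

buffers = [[], []]            # two RAM blocks
write_idx = 0                 # block currently being written

def acquire_cycle(camera_frames):
    """Producer: fill the current write buffer with one strobe cycle."""
    global write_idx
    buffers[write_idx] = list(camera_frames[:FRAMES_PER_CYCLE])
    write_idx ^= 1            # swap: the next cycle goes to the other block

def process_cycle():
    """Consumer: read the block that was just written (now free of writes)."""
    frames = buffers[write_idx ^ 1]
    # background difference / demodulation would happen here
    return frames
```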
The time interval of the synchronous trigger interrupt is a fixed value, which is greater than the time required for the FPGA to acquire 6 frames of images, so it can be guaranteed that the FPGA will write the current memory block before the trigger arrives.
Global Positioning Algorithm for English Reading Text Based on Semantic Segmentation
4.1. FCN Network. Traditional semantic segmentation mainly relies on manual feature extraction and pixel prediction and classification based on probability distributions. The classic approach extracts features through a deep convolutional neural network with layer-by-layer convolution and pooling and finally completes tasks such as classification through a fully connected layer. A classic convolutional neural network structure contains a series of convolutional layers, pooling layers, and fully connected layers. The convolutional layer uses multiple convolution kernels with different parameters to convolve local image patches and obtain multidimensional local features; the pooling layer reduces the spatial dimensions of the multidimensional local features obtained by convolution, reducing the amount of computation. The fully connected layer integrates the local features obtained above and outputs the probability of the image category according to the task target.
But in traditional network structures such as AlexNet and VGG-Net, the last few layers are fully connected layers, the output is a target classification result, and pixel-level segmentation cannot be performed. The fully convolutional network (FCN) replaces the fully connected layers of the classification network with convolutional layers and then upsamples with deconvolutional layers to obtain a prediction for each pixel's category. In addition, coarse high-level information and fine low-level information are merged to obtain better segmentation results. A schematic diagram of semantic segmentation based on a fully convolutional neural network is shown in Figure 3.
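A minimal PyTorch sketch of this idea is shown below; it is an illustration of the FCN-style head (1x1 convolution as classifier plus learned 32x upsampling), not the paper's exact model, and the channel counts and class number are assumptions.

```python
import torch
import torch.nn as nn

num_classes = 4   # assumed number of segmentation classes

fcn_head = nn.Sequential(
    nn.Conv2d(512, num_classes, kernel_size=1),          # "fc" layer as conv
    nn.ConvTranspose2d(num_classes, num_classes,
                       kernel_size=64, stride=32,
                       padding=16, bias=False),          # 32x upsampling
)

features = torch.randn(1, 512, 8, 8)   # dummy backbone output
logits = fcn_head(features)
print(logits.shape)                    # torch.Size([1, 4, 256, 256])
```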
Establishment of Surround View Camera Dataset and Network Training
The fisheye camera used in this article requires fine-tuning, which requires a lot of data. However, there is almost no database for semantic segmentation of fisheye camera images. In order to expand the dataset, this article uses the above-mentioned surround-view fisheye cameras, four in total, to collect fisheye images.
Deep learning network training needs to rely on a large number of training samples, while this article only has a small-scale dataset. In order to prevent overfitting during network training, this paper adopts a transfer learning method for training: the FCN model pretrained on the Cityscapes dataset is fine-tuned. Since our dataset is small and differs to some extent from the images in the Cityscapes dataset, this article freezes the weights of the earlier convolutional layers in the pretrained model and only retrains the last two convolutional layers.
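A hedged PyTorch sketch of this fine-tuning setup follows; the tiny stand-in backbone is an assumption (the real model would be the Cityscapes-pretrained FCN), but the freeze/unfreeze mechanics are the point.

```python
import torch
import torch.nn as nn

# Stand-in backbone; in practice this would be the pretrained FCN.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 32, 3, padding=1),   # last two conv layers:
    nn.Conv2d(32, 4, 1),               # these remain trainable
)

# Freeze everything, then unfreeze only the last two layers.
for p in model.parameters():
    p.requires_grad = False
for layer in list(model.children())[-2:]:
    for p in layer.parameters():
        p.requires_grad = True

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
```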
English Word Boundary Detection and Extraction.
In English text, English word boundaries are difficult to define clearly, and rule-based detection of English word boundaries is extremely difficult. This article divides English word boundaries into three major categories: F, S, and D. The roadside boundary refers to the boundary of the F area, but the boundary between D and F is not a real English word boundary. Therefore, the English word boundary B in this article is defined as the boundary between F and S:

$$B = \{(I_i, b_i, E_i)\}_{i=1}^{N},$$

where N is the number of boundary points in the F area, $I_i$ denotes which of the four fisheye camera images (front, rear, left, right) the point comes from, $b_i$ refers to the pixel coordinates in the image coordinate system, and $E_i$ indicates whether it is a real English word boundary. We use a series of appropriate line segments to mark the English word boundaries defined in this article and discretize them at equal intervals as the English word boundary images. This paper uses high-precision images to mark the prior English word boundary images. The high-precision images used in this paper are all established by the image acquisition equipment of the intelligent laboratory. During the mapping process, the vehicle-mounted panoramic camera Ladybug-5 was used for image collection, and high-precision GPS was used to obtain location information. The collected panoramic images are subjected to inverse perspective transformation and spliced to obtain a wide range of road images.
The process of constructing feature images can be abstracted as the process of finding the union of visual feature points in the navigation coordinate system. We use the ICP algorithm to match the two to obtain positioning. The classic ICP algorithm is an iterative optimization process that repeatedly finds corresponding points and then minimizes the Euclidean distance error between them. However, there is a systematic error in observing the English word boundary through the inverse perspective transformation: the farther a point is from the camera, the greater the position error obtained by the projection; the closer the point, the higher the projection accuracy. The classic ICP algorithm therefore needs to be improved: the confidence of each observation point is introduced into the calculation of the distance error, which is called weighted ICP. The following formula gives the mathematical model of the error function in the weighted ICP:

$$E(R, T) = \sum_i m_i \left\| q_i - (R p_i + T) \right\|^2,$$

where $q_i$ and $p_i$ are the corresponding points, $m_i$ is the confidence of the corresponding point, and R and T are the rotation and translation matrices obtained through optimization. According to the previous analysis, the confidence is related to the distance between the projection point and the camera: the farther the distance, the lower the confidence, and the closer the distance, the higher the confidence. This paper defines a confidence evaluation model: when the distance between the projection point and the camera is less than the threshold Min, the confidence is 1; when the distance lies between Min and Max, the confidence decreases linearly; when the distance is greater than Max, the confidence is 0.
Although weighted ICP can obtain accurate matching results in most scenes, in busy scenes there are usually a large number of obstructions blocking the camera's line of sight, so not enough English word boundary points can be detected, and matching becomes difficult. Alternatively, the English word boundary points may be far away, so the projection error increases, the observation confidence decreases, and the matching result is poor. Therefore, it is necessary to decide whether matching can be performed before attempting it. The formula above gives the confidence of a single projection point; in this paper, the average confidence of all projection points observed in each frame is used as an a priori evaluation, and matching-based positioning is performed only when the average confidence is higher than a threshold.
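The confidence model and one weighted alignment step can be sketched in a few lines of NumPy. This is a generic weighted rigid fit (weighted Kabsch/SVD) under the error function above for 2D points with known correspondences, not the paper's full iterative pipeline:

```python
import numpy as np

def confidence(d, d_min, d_max):
    """Piecewise-linear confidence from projection distance d:
    1 below d_min, linear decay between d_min and d_max, 0 above d_max."""
    return np.clip((d_max - d) / (d_max - d_min), 0.0, 1.0)

def weighted_rigid_fit(P, Q, w):
    """Rigid (R, T) minimizing sum_i w_i * ||Q_i - (R @ P_i + T)||^2.

    P, Q: (N, 2) arrays of corresponding 2D points; w: (N,) confidences.
    """
    w = w / w.sum()
    mp, mq = w @ P, w @ Q                      # weighted centroids
    X, Y = P - mp, Q - mq
    H = (w[:, None] * X).T @ Y                 # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    T = mq - R @ mp
    return R, T
```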
Analysis of Observation Records
This article sorts out the usual performance scores of the experimental class and the control class during the experiment. Usual performance scores consist of classroom performance scores and homework scores, each accounting for 50%. This article uses SPSS 22.0 to perform a statistical analysis of the results of the two classes; the analysis results are shown in Table 1. It can be seen from Table 1 that the average scores of the experimental class and the control class are 87.5 and 86.8, respectively, the T value is 1.26, and the two-tailed significance P value is 0.179. Since P > 0.05, there is no obvious difference between the average scores of the experimental class and the control class. In other words, even though the average of the experimental class's usual scores is slightly greater than that of the control class, there is no significant difference between the two. That is, the effect of the English reading app on students' performance in the English classroom and homework during the experiment was not particularly obvious. This may be related to students' learning motivation, self-learning ability, and self-discipline.
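For readers who want to reproduce this kind of comparison, an independent-samples t-test takes one line with SciPy. The data below are synthetic stand-ins (assumed group sizes and spreads), not the study's actual scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores_exp = rng.normal(87.5, 3.0, 50)    # experimental class (assumed n, sd)
scores_ctrl = rng.normal(86.8, 3.0, 50)   # control class (assumed n, sd)

t, p = stats.ttest_ind(scores_exp, scores_ctrl, equal_var=True)
print(t, p)   # compare p against the 0.05 significance level
```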
Analysis of Test Results
Before the experiment, the instructor tested the experimental class and the control class separately. The average pretest score of the experimental class was slightly higher than that of the control class. The F value of the test for homogeneity of variance of the pretest scores did not reach the significance level (P > 0.05), so the hypothesis of homogeneity of variance was accepted, that is, the variances of the two classes are equal. The T value is 1.42, and the two-tailed significance P value is 0.152. Since P > 0.05, there is no significant difference between the experimental class and the control class. After the experiment, the instructor tested both classes again. The comparison of the experimental class's test scores before and after the experiment is shown in Figure 4, and that of the control class in Figure 5.
Comparing the scores of the experimental class and the control class before and after the test, it can be seen that learners' reading test scores improved. This article believes that the reason for this change is that both the experimental class and the control class were learning English during the experiment, so both improved after it. However, the T value for the average posttest scores of the experimental class and the control class is 2.927, and the two-tailed P value is 0.004 (P < 0.05), so there is a significant difference in scores. In other words, the performance of the experimental class improved more than that of the control class. This shows that by using English reading software for English reading learning, learners' English reading test scores improved to a certain extent.
Analysis of the Results of the Questionnaire Survey
There are a total of 21 questions in this questionnaire. The first 20 questions are closed-ended, and the 21st question is open-ended. The answers to the closed questions take the form of a 5-point Likert scale, where 1 represents "strongly disagree," 2 represents "disagree," 3 represents "uncertain," 4 represents "agree," and 5 represents "strongly agree." The questionnaire has a total score of 100 points over its 20 Likert-scale questions: answering "strongly disagree" throughout gives 20 points, "disagree" 40 points, "uncertain" 60 points, "agree" 80 points, and "strongly agree" 100 points. The total score of a student in the experimental class represents his or her evaluation of the use of the English reading software: the higher the score, the more the experimental subject agrees with the use of the English reading software, and vice versa.
After the experiment, the questionnaires were distributed and collected on site. 100 questionnaires were distributed and 100 were returned, a recovery rate of 100%. Among the 100 returned questionnaires, there were no invalid ones. After sorting out the questionnaire score data, this paper uses SPSS 22.0 to perform statistical analysis on the data. The results are shown in Figure 6.
(1) Knowledge and skills. The secondary dimensions of the knowledge and skills dimension include vocabulary, sentences, cultural background knowledge, and article information analysis. It can be seen from Table 2 that in terms of English knowledge and skills, learners' approval of the impact of English reading software on their English cultural background knowledge is the highest, followed by vocabulary, while the lowest is English article information analysis.
(2) Process and method. The process and method dimension includes the secondary dimensions of learning strategies and learning methods. During the experiment, the subjects used English reading software to practice English reading, and their learning strategies improved. Learners used the English reading software to study English outside class, anytime and anywhere, or chose their favorite English articles for reading and recitation. Over the course of the study, students changed and improved their English reading strategies, thereby improving their English reading ability. Figure 7 shows the distribution of approval for the process and method dimension; the proportion of respondents who strongly agree is the highest.
(3) Emotional attitude. The data in Figure 8 show that learners' interest in learning changed in the process of using English reading software to learn English. By using the software, most learners came to prefer reading in English and to actively participate in learning activities.
Algorithm Running Time Test.
The experimental class used the semantic segmentation-based global positioning algorithm for English reading text proposed in this paper. In order to verify whether the algorithm can be applied to large-scale datasets, we conducted simulation experiments on its running time. With a large-scale sample set, the running time of the global positioning algorithm for English reading text based on semantic segmentation is shown in Figure 9. It can be seen that the running time of the algorithm stays below 1.1 s, which meets the real-time requirements.
Conclusion
We propose a segmented combined visual English text measurement method and build a complete English text measurement system based on it. The system fully combines the advantages of the linear image sensor and the area image sensor in long-distance and short-distance English text measurement and can continuously perform high-precision English text tracking within a large range of distance changes. A dedicated image acquisition and processing circuit is designed for each linear array camera of the remote measurement module. This circuit can flexibly acquire the target image and demodulate the imaging position of the target light source under the control of external signals. The design of the communication link ensures the imaging synchronization of the two sets of orthogonal line-scan cameras in hardware, which provides a guarantee for high-precision dynamic English text measurement. This paper analyzes the environment perception problems faced by the vehicle-mounted surround vision system in a complex environment and proposes the use of deep-learning-based semantic segmentation, which provides a solid foundation for the global positioning of English reading text. The English word boundaries of the English text are extracted from the semantic segmentation image. Based on the plane assumption, they are projected to the coordinate system and matched with the English word boundary information marked in the high-precision image to obtain the global initial positioning. With the rapid development of information technology, college English teaching is no longer limited to traditional classroom teaching, because students cannot acquire English knowledge only through classroom reading instruction. Students need to learn English reading anytime and anywhere according to their own situation. The birth of reading software has provided great help for the further promotion of mobile language learning. Under the guidance of the concept of mobile learning, English reading teaching can be carried out by integrating English reading software with English courses. Teachers should design English classes based on English reading software through a comprehensive analysis of the English curriculum requirements and teaching content, so as to stimulate students' enthusiasm for learning and enhance their English language ability. In English reading learning, students should use mobile devices to promote their English learning based on the characteristics of English reading software. In addition, teachers can also guide students to use good English reading software to learn English. English reading software not only makes it more convenient for students to learn English but also subtly improves students' autonomous learning ability.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
"Computer Science"
] |
BAK overexpression mediates p53-independent apoptosis inducing effects on human gastric cancer cells
Background BAK (Bcl-2 homologous antagonist/killer) is a novel pro-apoptotic gene of the Bcl-2 family. It has been reported that gastric tumors have reduced BAK levels when compared with the normal mucosa. Moreover, mutations of the BAK gene have been identified in human gastrointestinal cancers, suggesting that a perturbation of BAK-mediated apoptosis may contribute to the pathogenesis of gastric cancer. In this study, we explored the therapeutic effects of gene transfer-mediated elevation of BAK expression on human gastric cancer cells in vitro. Methods A eukaryotic expression vector for the BAK gene was constructed and transferred into the gastric cancer cell lines MKN-45 (wild-type p53) and MKN-28 (mutant-type p53). RT-PCR and Western Blotting detected cellular BAK gene expression. Cell growth activities were detected by MTT colorimetry and flow cytometry, while apoptosis was assayed by electron microscopy and TUNEL. Western Blotting and colorimetry investigated cellular caspase-3 activities. Results BAK gene transfer resulted in significant BAK overexpression, decreased in vitro growth, cell cycle G0/G1 arrest, and induced apoptosis in gastric cancer cells. In transferred cells, the inactive caspase-3 precursor was cleaved into the active subunits p20 and p17 during BAK overexpression-induced apoptosis. In addition, this process occurred equally well in p53 wild-type (MKN-45) and p53 mutant-type (MKN-28) gastric cancer cells. Conclusions The data presented suggest that overexpression of the BAK gene can lead to apoptosis of gastric cancer cells in vitro, which does not appear to depend on p53 status. The mechanism of BAK-mediated apoptosis correlates with the activation of caspase-3. This could serve as a potential strategy for the further development of gastric cancer therapies.
Background
Apoptosis is critical not only for tissue homeostasis but also in the pathogenesis of a variety of diseases, including cancer. The transformation of gastrointestinal epithelial tissue to carcinomas has been shown to be associated with the progressive inhibition of apoptosis [1,2]. The Bcl-2 family of genes appears to be important in the regulation of apoptosis. Members of this family are cellular homologues that are either pro-apoptotic (Bax, Bik and Bid) or anti-apoptotic (Bcl-2 and Bcl-XL) [3,4]. Recently, BAK (a Bcl-2 homologous antagonist/killer) has been cloned as a Bcl-2-related gene [5,6]. The BAK gene encodes a 211-amino-acid protein with a relative molecular weight (Mr) of 23,400. Gene transfer-mediated elevation of BAK protein levels accelerates apoptosis induced by growth factor deprivation in murine lymphoid, lung cancer, and breast cancer cells [7], suggesting that BAK functions primarily as a promoter of apoptosis.
BAK expression has been reported in normal gastrointestinal epithelium [8]. Strong BAK immunoreactivity has been shown to be present in the gastric epithelial cells lining the gastric pits and in parietal cells, whereas the self-renewing mucous cells located below the gastric pits in the gastric neck region are immunonegative [9]. In gastrointestinal epithelial tissues, the up-regulation of BAK expression during differentiation may help to ensure that cell turnover occurs in a normal fashion. However, it has been reported that gastric tumors have reduced BAK levels when compared with the normal mucosa [10]. Moreover, mutations of the BAK gene have been identified in human gastrointestinal cancers, suggesting that a perturbation of BAK-mediated apoptosis may contribute to the pathogenesis of gastric cancer [11,12].
Several recent reports have demonstrated the feasibility of using gene transfers to treat gastric cancers [13,14]. In the current study, utilizing BAK gene transfer via plasmid vector, we investigated the apoptosis inducing effects of BAK overexpression on human gastric cancer cells in vitro, in order to explore a novel strategy for gene therapy of gastric cancer.
Cell lines and reagents
The human gastric cancer cell lines, MKN-45 (wild-type p53) and MKN-28 (mutant type p53), were obtained from the Typical Culture Center of Wuhan University (China). Cells were maintained at 37°C in a humidified atmosphere of 5% CO 2 in RPMI 1640 (Gibco PRL) supplemented with penicillin/streptomycin (100 units/ml and 100 µg/ml, respectively) and 10% fetal calf serum (FCS). Restriction enzymes and DNA recovery kits were purchased from Promega Company. T 4 DNA Ligase was purchased from the TaKaRa company. Propidium iodide was purchased from the Sigma company. TUNEL and caspase-3 activity detection kits were purchased from the Boster and Clontech companies respectively.
Vector construction
Plasmid pET-BAK was kindly provided by Dr. Patty Wendel (Cold Spring Harbor, USA) and contained the BAK cDNA (633 bp) between the restriction sites for XhoI and HindIII. Prof. Shen Qu (Department of Biochemistry, Tongji Medical College, China) provided the eukaryotic vector pcDNA3. The plasmids pET-BAK and pcDNA3 were digested with XhoI and HindIII to recover the BAK gene (633 bp) and the linear vector fragment (5.4 kb), respectively. The ligation reaction was conducted with T4 Ligase. According to the physical maps of the BAK gene and the vector, the recombinant was digested with XhoI, with HindIII, and with XhoI plus HindIII, respectively, for restriction enzyme verification.
Gene transfer and expression
For both the MKN-45 and MKN-28 cell lines, an untransferred control group, a pcDNA3 transfer group, and a pcDNA-BAK transfer group were designed. Gene transfer was conducted with Lipofectamine 2000 (Gibco, USA) according to the manufacturer's instructions. Cellular BAK mRNA expression levels were assayed by reverse transcription polymerase chain reaction (RT-PCR). Cellular total RNA extraction was conducted with TRIzol (Life Technologies). After the reverse transcription reaction, the PCR amplification reaction was conducted with the following primers: BAK (502 bp) upstream 5'-CTGCCCT CTGCT-TCTGA-3', downstream 5'-CGTT CAGGATGGGACCA-3'; α-Tubulin (295 bp) upstream 5'-CCCGTCTTCAG-GGTCTCTTG-3', downstream 5'-TTAAGGTAAGTGTAG-GTTGGG-3'. α-Tubulin served as an internal control for PCR. Amplification products were separated by 1% agarose electrophoresis and observed under ultraviolet light. The brightness ratios between BAK and α-Tubulin were evaluated with a gel computer imaging system (Gel Doc 1000, Bio-Rad). Cellular BAK protein expression levels were assayed via Western Blotting. The extraction, quantification, and separation of proteins were conducted according to Molecular Cloning. Blots were incubated sequentially with 1% nonfat dry milk, mouse monoclonal anti-BAK antibody (Boster Company), and goat horseradish peroxidase-conjugated anti-mouse immunoglobulin G, and evaluated using an ECL Western blotting kit. α-Tubulin served as an internal control for the immunoblot. Protein band intensities were determined densitometrically using the computer imaging system.
Cellular growth assay
MKN-45 and MKN-28 cells were seeded at a density of 3 × 10^4/ml in 96-well chamber slides (five wells per group). At 1, 3 and 5 days after transfection, cellular growth was measured by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT, Clontech) colorimetric method. Each well received 20 µl of 0.5% MTT and was cultured for another 4 h. The supernatant was then discarded and 100 µl of dimethyl sulfoxide was added to each well. Once the formazan crystals had dissolved, absorbance was read at 570 nm on an enzyme-labeled Minireader II. The cellular proliferation inhibition rate (%) was calculated as (1 - mean A570 of the experimental group/mean A570 of the untransfected control group) × 100.
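As a worked example of the inhibition-rate formula above, the following Python sketch applies it to hypothetical A570 readings for the five replicate wells of each group (values invented for demonstration):

```python
import statistics

# Hypothetical A570 readings for the five replicate wells per group.
a570_control = [0.81, 0.79, 0.83, 0.80, 0.82]  # untransfected control
a570_treated = [0.52, 0.55, 0.50, 0.53, 0.54]  # pcDNA-BAK transfected

def inhibition_rate(treated, control):
    """Proliferation inhibition rate (%) as defined in the text:
    (1 - mean A570 of experimental group / mean A570 of control) * 100."""
    return (1 - statistics.mean(treated) / statistics.mean(control)) * 100

print(f"Inhibition rate: {inhibition_rate(a570_treated, a570_control):.1f}%")
```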
Cell cycle assay
Three days after transfection, 2 × 10^6 cells from each of the above groups were collected. Cells were washed twice with 0.01 mol/L phosphate-buffered saline (PBS) and fixed in 70% ethanol overnight at 4°C. The cells were then washed once with PBS, digested with 200 µl RNase (1 mg/ml) at 37°C for 30 minutes, and stained with 800 µl propidium iodide (50 µg/ml) at room temperature for 30 minutes. DNA histograms were acquired by flow cytometry (Becton Dickinson) and analyzed with CellQuest software.
Cellular apoptosis detection
Cancer cells from the groups above were collected, rinsed with PBS, fixed in 2.5% glutaraldehyde for 30 minutes, and then washed with PBS. After routine embedding and sectioning, cellular ultrastructure was observed under an electron microscope. Apoptotic rates were detected by the TdT-mediated dUTP-biotin nick end labeling (TUNEL) method. After blocking with 0.3% H2O2 for 30 min, sections were digested with proteinase K (20 mg/L) for 20 min and then incubated with the TUNEL reaction solution at 37°C for 60 min. The sections were then incubated with peroxidase-conjugated antibody at 37°C for 30 min, stained with diaminobenzidine (DAB) and counterstained with haematoxylin. For the negative control, PBS was substituted for the TUNEL reaction solution. Under a light microscope, apoptotic cells appeared smaller in volume, with shrunken nuclei and characteristic brownish-yellow staining of the chromatin.
Caspase-3 expression and activity assays
Cellular caspase-3 protein expression was determined by Western blotting (as described above). For the activity assay, 2 × 10^5 cells from each of the above groups were collected, 50 µl of cellular lysis buffer was added, and the cells were incubated on ice for 10 min. After centrifugation (12,000 rpm) at 4°C for 3 min, the supernatant was collected, mixed sequentially with 50 µl of 2× reaction buffer and 5 µl of 1.0 mmol/L caspase-3 substrate DEVD-pNA, and incubated at 37°C for 1 h. After transfer to a 96-well plate, absorbance at 405 nm (A405) was read on the enzyme-labeled Minireader II; the A405 value represents the relative caspase-3 activity.
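The text reports the raw A405 value as the relative caspase-3 activity; a common way to present such readings is as a fold change over the untransfected control. The following Python sketch shows that optional normalization on hypothetical readings (the fold-change step is an assumption, not part of the original protocol):

```python
# Hypothetical A405 readings after 1 h incubation with DEVD-pNA.
a405 = {"untransfected": 0.12, "pcDNA3": 0.13, "pcDNA-BAK": 0.41}

# Express activity as fold change over the untransfected control
# (an assumed normalization; the text reports raw A405 values).
baseline = a405["untransfected"]
for group, value in a405.items():
    print(f"{group}: A405 = {value:.2f}, fold change = {value / baseline:.2f}")
```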
Statistical analysis
Data analysis was performed using SPSS 10.0 statistical software; P < 0.05 was considered statistically significant.
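The text does not name the specific tests run in SPSS; judging from the paired group comparisons reported in the Results (e.g., P>0.05, P<0.01), an independent-samples t-test is one plausible analogue. The following Python sketch illustrates such a comparison on hypothetical replicate apoptosis rates (not the study's data):

```python
from scipy import stats

# Hypothetical apoptosis rates (%) from replicate TUNEL counts; the
# original analysis was run in SPSS 10.0 and the exact test is not named,
# so this independent-samples t-test is only an illustrative analogue.
untransfected = [4.5, 4.9, 4.7, 4.6, 4.8]
pcdna_bak = [21.0, 21.9, 20.8, 21.7, 21.6]

t_stat, p_value = stats.ttest_ind(pcdna_bak, untransfected)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.01 indicates significance
```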
BAK overexpression in gastric cancer cells
The eukaryotic expression vector pcDNA-BAK (6.0 kb) was constructed. Electrophoresis on a 1% agarose gel showed that insertion of the BAK cDNA into the recombinant vector was successful (Fig. 1A). After electrophoresis of the RT-PCR products, BAK amplification bands could be observed in both the untransfected and pcDNA3-transfected groups; the difference between them was not significant (P>0.05). Three days after transfection with pcDNA-BAK, BAK mRNA levels in MKN-45 and MKN-28 cells were significantly increased (P<0.01) (Fig. 1B). Western blotting and densitometric analysis indicated that the BAK protein band intensities of pcDNA-BAK-transfected MKN-45 and MKN-28 cells were significantly higher than those of untransfected and pcDNA3-transfected cells, respectively (P<0.01) (Fig. 1C).
Cell cycle arrest
On the flow cytometry histograms (Fig. 3), the S phase ratio in pcDNA-BAK-transfected cancer cells was significantly lower than that of the untransfected control, and the cell cycle was arrested at the G0/G1 phase. In both pcDNA-BAK-transfected MKN-45 and MKN-28 cells, a sub-diploid DNA peak, corresponding to the apoptotic cell fraction, could be detected.
Induction of cellular apoptosis
After transfection with pcDNA-BAK, some MKN-45 and MKN-28 cells presented the characteristic morphological changes of apoptosis, such as nuclear shrinkage, chromatin condensation around the nuclear membrane, and reduced cell volume with an intact nuclear membrane, as observed under an electron microscope (Fig. 4A). The TUNEL assay indicated that the apoptosis rates of untransfected MKN-45 and MKN-28 cells were 4.7% and 4.2%, respectively, after being cultured for 3 days. After pcDNA-BAK transfection for 3 days, the apoptosis rates of MKN-45 and MKN-28 cells were 21.4% and 20.1%, respectively (P<0.01) (Fig. 4B).
Discussion
Gastric cancer is the most common malignant tumor of the gastrointestinal tract in the world. There has been a clinical need to identify ideal candidate genes for use in treating gastric cancer patients with gene transfer strategies [15]. One promising group is the pro-apoptotic members of the Bcl-2 family (Bax and BAK), which have been shown to induce apoptosis after gene transfer via plasmid vectors in vivo [16,17].
The pro-apoptotic BAK gene is located on chromosome 6 and shares homology with other members of the Bcl-2 family, including both the anti-apoptotic and pro-apoptotic members [18]. The family members interact through highly conserved Bcl-2 homology domains (BH1, BH2, and BH3) that allow hetero- and homodimerization and the consequent close regulation of apoptosis [19]. In the case of Bax, this scheme has been well worked out in a rheostat model in which excess pro-apoptotic Bax suppresses Bcl-2 and induces apoptosis via cytochrome c [20]. Less is known about BAK, but there is evidence that BAK can form heterodimers with Bcl-2 or Bcl-xL to inhibit their anti-apoptotic functions [21]. BAK deficiency has been shown to be directly responsible for the arrest of cytochrome c release, and this capability was restored to BAK-deficient cells by the insertion of recombinant BAK into purified mitochondria from these cells [22]. Furthermore, synergistic activity was detected with combinations of suboptimal doses of recombinant BAK and Bax, suggesting that in the presence of a low dose of recombinant BAK, the resistance of these mitochondria to Bax-mediated cytochrome c release was reversed [23]. As recently reported for hepatocytes from BAK-/- mice [24], BAK-deficient Jurkat cells were also resistant to tBid-induced cytochrome c release, suggesting that in these leukemic cells BAK is involved in mitochondrial cytochrome c release induced by either tBid or Bax. Current evidence indicates that deficient BAK expression is closely correlated with the occurrence and development of tumors [25]. Tomkova et al. investigated BAK expression in twenty cases of neoplastic skin disease; seventeen were negative for immunostaining, and the other three cases showed only weak immunostaining in the regions of tumor formation [26]. Kondo et al. found that the positive ratio of BAK expression was negatively correlated with the pathological stage and clinical phase of gastric cancer [27]. Many tumor therapeutic drugs, such as perillyl alcohol and γ-interferon, function through up-regulating BAK expression [28]. Recently, Pataer et al. reported that adenovirus-mediated overexpression of BAK could induce marked apoptosis in lung and breast cancer cells, providing a novel strategy for cancer therapy [29].
In the present study, high levels of BAK protein were induced in the gastric cancer cell lines MKN-45 and MKN-28 (Fig. 1B,1C). It was also shown that BAK overexpression could arrest the cell cycle (Fig. 2B) and induce apoptosis in cancer cells (Fig. 3A,3B). Current evidence indicates that the caspases, cysteine proteases of the ICE/CED-3 family, are the central components of the cell death machinery in various forms of apoptosis, and caspase-3 is the most likely candidate for a mammalian cell death effector, acting by cleaving vital cellular proteins [30]. In this study, we found that the inactive caspase-3 precursor was cleaved into the 20 kDa and 17 kDa subunits, forming the active protease, during BAK overexpression-induced apoptosis (Fig. 4A,4B). In addition, our studies suggest that this process is p53 independent, because BAK-induced apoptosis occurred equally well in MKN-45 (wild-type p53) and MKN-28 (mutant p53) cells.
[Figure: Cell cycle arrest effects of BAK overexpression on gastric cancer cells]
Conclusions
The data presented in this paper show for the first time that overexpression of BAK, mediated by plasmid vectors, leads to apoptosis of gastric cancer cells in vitro. This anticancer strategy does not appear to be dependent on p53 status. The mechanism of BAK-mediated apoptosis, however, correlates with activation of caspase-3. Consistent with other findings on BAK, we believe BAK plays a pivotal role in the process of apoptosis and may serve as a candidate for gene therapy of gastric cancer.
[Figure: Apoptosis-inducing effects of BAK overexpression on gastric cancer cells. There were no obvious apoptotic cells in the untransfected group; the pcDNA-BAK-transfected cells were reduced in size with obvious nuclear staining.]
in flow cytometry assay. WQ carried out TUNEL assay. All authors read and approved the final manuscript.
"Biology",
"Medicine"
] |
Assessing the Effect of Relationship Marketing on Customers' Loyalty in the Public and Private Banks of the Qom Province: A Case Study of Public and Private Banks of the Qom Province
Competition intensity in the marketplace and awareness of the importance of customer retention have inclined organizations to develop and maintain long-term relations with their customers. According to marketing scholars, relationship marketing is the best strategy for this purpose. This article assesses the relation between relationship marketing factors and customers' loyalty, the priority of those factors, and the differences in these variables between the public and private banks of the Qom province. The statistical population of the present research consists of 110 customers of a private bank and a public bank in the province of Qom. Data were gathered by questionnaire; a statistical population mean test and the Friedman test were used to analyze the data, a correlation test to study the relation between marketing factors and customers' loyalty, and a two-sample t-test to study the differences in the variables between the two types of banks. The results obtained by analyzing the data indicate that there is a positive relation between relationship marketing factors and customers' loyalty. The commitment, trust, communication and conflict handling factors hold the first to fourth priorities in relation to customers' loyalty. Among the research variables, no significant difference could be observed between the two types of public and private banks.
Keywords: Relationship marketing; Trust; Commitment; Communications; Conflict Management; Loyalty
INTRODUCTION
The level of customers' satisfaction determines a firm's success or failure; therefore, firms pursue customer retention and loyalty. The literature shows that the cost of attracting new customers, owing to marketing and advertising expenses, exceeds the cost of retaining existing ones, and that losing a customer is not just the loss of a single sale but the loss of the whole flow of purchases that the customer could have made over his or her lifecycle (Kotler, 1999, p. 28). In this context, only those organizations that base their activities on satisfying customers' desires and needs will find appropriate opportunities in competitive fields, as high levels of satisfaction lead to greater loyalty (Lovelock & Wright, 2003, p. 175). Since the early 1980s, many firms have tended to create constant interactions with their suppliers and other beneficiaries, and in 1983 the term relationship marketing was introduced for the first time (Wong, 2004, p. 86). Kotler states that nowadays firms should focus on retaining their present customers and on creating lasting and efficient relations with them. In the competitive, complex and active environment of the banking system, the smallest difference in service provision causes large shifts in the industry. Conventional banks are, to a large extent, turning into customer-oriented banks in accordance with relationship marketing principles, in which customer loyalty is considered the main purpose. In this dynamic environment it is greatly important to create and establish strategies that foster customer loyalty (Beerli et al., 2004, p. 253). Modern marketing strategies are required to achieve and maintain competitive advantage in today's competitive marketplace, and relationship marketing is one of these strategies. By applying this policy along with creating long-term communication, we can identify and enhance the activities that are important from the customers' point of view, attract more customers and make them loyal to the organization. The banking industry is not excluded from this principle and pursues various managerial strategies to attract and retain customers. Today, with the establishment of private banks, bank managers should consider customers' desires and needs to prevent customers from turning toward competitors. Relationship marketing is a long-term policy whose main objective is to provide value for customers over the long term, so applying relationship marketing is one way for banks to retain customers over a long period. Thus, the present research addresses the relation between relationship marketing and customers' loyalty in the public and private banks of the Qom province.
LITERATURE REVIEW
The American Marketing Association defines relationship marketing as follows: "Relationship marketing is a kind of marketing with the deliberate aim of developing and managing long-term and reliable communications with customers, suppliers and other actors in the marketing environment." The relationship marketing concept was first used formally by Berry in 1983 in the field of services, where he referred to it as a strategy to attract, maintain and improve communications with customers. Relationship marketing concerns customer retention, the improvement of communications with customers, and attracting them more and more (Fontroot & Heiman, 2004). Generally, there are many definitions of relationship marketing, but the two definitions provided by relationship marketing scholars and used most in the literature are the following: (a) relationship marketing is to identify, create, maintain and enhance relations, in an efficient manner, with customers and the parties with whom the firm interacts, so that the objectives of all groups are met through mutual exchange; (b) relationship marketing is to consider the marketing process as a network of interactions and communications (Taghdiri & Saberi, p. 32).
Kotler has referred to relationship marketing as the concept of building, retaining and improving relations with customers (Kotler, 1999). Gronroos has introduced relationship marketing as a process of identifying, creating, maintaining and fortifying communications with customers and other beneficiaries for bilateral benefit, and of ending those communications if needed, at a mutual profit, so that the objectives of the parties involved are met (Gronroos, 1994, p. 15). Relationship marketing is the understanding and managing of relations with customers and suppliers (Shel et al., 2006). Different authors have considered various underpinnings for relationship marketing. Trust is one of the most important underpinnings of relationship marketing. Dwyer et al. defined trust as the belief that a partner's word or promise is reliable (Dwyer, Schurr and Oh, 1987). Morgan and Hunt also believe that establishing trust in a relation requires a level of confidence by each partner in the truth of the other partner's promises, and they view the reason for emphasizing trust as a relationship marketing variable to be its necessity in establishing relational contracts (Morgan and Hunt, 1994, pp. 20-38). The second underpinning introduced for relationship marketing is commitment. Dwyer et al. defined commitment as an explicit or implicit obligation to an enduring relationship between the parties to a contract. Morgan and Hunt also defined commitment as the enduring desire of business partners to maintain valued relationships, and they believe that commitment is established when one of the parties believes in the importance of the relation and applies maximum effort to maintain and improve the relationship. Commitment, as a desire to develop constant relations, has been defined as an interest in maintaining relations and as a belief in their constancy. The third underpinning is communications: timely communications in particular, by resolving conflicts at the root, foster trust and adjust expectations and perceptions. Communication plays a constant and explicit role in building and informing cooperation and trust between the parties (Anderson & Narus, 1990). The fourth underpinning of relationship marketing is conflict handling. Conflict has been described as a level of disagreement between the parties to a contract that can be either implicit or explicit. Increasing conflict in a relation reduces trust and the inclination to build and maintain a long-term relation. Conflict handling, however, has been defined as the management of the overall level of disagreement in formal communications. The ability of a service organization to handle conflict is a vital factor for customer retention. It should also be considered that the total suppression of conflict leads to a loss of trust in a relation, and the parties will separate before they have committed to a long-term, enduring relation (Dwyer and Schurr, 1987, p. 19). A study on the effect of relationship marketing underpinnings on customers' loyalty was carried out by Bahram Ranjbaran and Mojtaba Barari in 2008 in the banks of the city of Isfahan. In that research, the effect of relationship marketing underpinnings, including commitment, trust, communications and conflict handling, on customers' loyalty, the importance of these variables from the customers' point of view, and the extent of the banks' success in building each of these variables were
assessed. The results of that research indicate that in the public bank, all four relationship marketing underpinnings had a positive and significant effect on customers' loyalty, while in the private banks all variables except communication had a positive and significant influence on customers' loyalty. Comparing those results to the present research shows that the findings are not identical. A study entitled "Relationship Marketing, a Policy to Improve Customers' Satisfaction" was conducted by Bahram Ranjbaran and Mojtaba Barari in 2009 in Isfahan. Given the importance of the relationship marketing policy for today's organizations, that article addressed the association of relationship marketing underpinnings such as trust, commitment, conflict handling and competence with customers' satisfaction with the bank's services. The article was a descriptive study conducted with a correlational, multivariate regression approach, and its statistical population consisted of 160 customers of Saman Bank in Isfahan. Data were gathered by questionnaire and analyzed through regression. The results showed that competence (0.253), communications (0.204), trust (0.136) and conflict handling (0.095), respectively, were associated with customers' satisfaction with the services of Saman Bank, whereas commitment had no significant relation with their satisfaction. The present research, in terms of its applied objective and data gathering, is a descriptive-correlational study. Its statistical population includes 110 customers of a private and a public bank in the province of Qom. Data were gathered by questionnaire; a statistical population mean test and the Friedman test were used to analyze the data, a correlation test to study the relation between marketing factors and customers' loyalty, and a two-sample t-test to study the differences in the variables between the two types of public and private banks. The results obtained from the data analysis indicated that there is a positive association between the relationship marketing factors and customers' loyalty. The commitment, trust, communications and conflict handling factors hold the first to fourth priorities in correlation with customer loyalty, and no significant difference was observed among the research variables between the two public and private banks.
CONCEPTUAL MODEL OF THE RESEARCH
The Ndubisi model, shown in the following figure, has been used in this research. As shown in Figure 1, this model includes four independent variables, trust, commitment, communications and conflict handling, and the dependent variable of loyalty. The customers of the public and private banks were surveyed about these four variables and asked to give their opinions about the priority of each variable; the questionnaire was constructed from these four variables and their relation to customers' loyalty.
RESEARCH QUESTIONS
1. What is the association and priority of the trust factor in relationship marketing with customers' loyalty?
2. What is the association and priority of the commitment factor in relationship marketing with customers' loyalty?
3. What is the association and priority of the communications factor in relationship marketing with customers' loyalty?
4. What is the association and priority of the conflict management factor in relationship marketing with customers' loyalty?
5. What is the difference in these variables between the private and public banks?
STATISTICAL POPULATION, SAMPLE AND SAMPLING APPROACH
The statistical population of this research is all the customers of the private and public banks in the province of Qom. Using a cluster-convenience sampling approach, Qavamin Bank was selected from among the private banks and Melli Bank of Iran from among the public banks, and questionnaires were distributed to and collected from at least 3 branches of these banks in the city. On the basis of the sampling formula, 110 questionnaires were distributed in each of the public and private banks and collected.
RESEARCH INSTRUMENT
In this research, the data-gathering instrument was a questionnaire comprising two general groups of questions. General questions: these gathered general and sociological information about the participants. Specialized questions: this part of the questionnaire consists of 22 questions, of which 4 concern trust, 5 commitment, 4 communications, 5 conflict handling and, finally, 4 customers' loyalty. Items were measured on a five-point Likert scale.
ASSESSMENT OF THE RELIABILITY AND VALIDITY OF THE QUESTIONNAIRE
In this research, the standard Ndubisi questionnaire, which includes 22 questions, has been used. The fifth and sixth questions of this standard questionnaire were changed at the discretion of the supervisor. The content validity approach was used to determine the validity of the questionnaire; content validity guarantees that all the dimensions and factors that can reflect the concepts involved exist in the measure (Danaiei Fard et al., 2002, p. 313). On this basis the questionnaire was developed and reviewed using the opinions of experts. Also, to evaluate the reliability of the questionnaire, a number of questionnaires were distributed among bank customers, and Cronbach's alpha, measured with SPSS software, was 0.811. As the obtained Cronbach's alpha exceeded 0.7, the reliability of the questionnaire was confirmed. More than 70% of the experts reached a consensus on the questions of the questionnaire, on which basis its validity was confirmed.
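For reference, Cronbach's alpha as reported here can be computed directly from the item-score matrix. The following Python sketch implements the standard formula on hypothetical five-point Likert responses (the data are simulated; only the formula reflects the procedure described above):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical 5-point Likert responses (10 respondents x 4 items).
rng = np.random.default_rng(0)
base = rng.integers(2, 5, size=(10, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(10, 4)), 1, 5).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
```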
STATISTICAL TESTS AND DATA ANALYSIS APPROACH
1. Descriptive statistics: frequency and frequency rates were used to describe the sample.
2. Inferential statistics: the following tests were used: a statistical population mean test to study the importance of each factor as recognized by the customers; the Friedman test to rank the factors; a correlation test to establish the relation between relationship marketing factors and customers' loyalty; and a two-sample t-test to study the differences in the variables between the public and private banks.
ANALYSIS OF DATA
The Spearman correlation test was used to study the relation between the relationship marketing factors and customers' loyalty in the public and private banks. As the significance value for each variable against customers' loyalty is less than 0.05, it can be said that there is a relation between the variables and customers' loyalty; and since the r-values are positive, the relation is positive, meaning that the more trust, commitment, communications and conflict handling grow in the banks, the more loyal the customers are to the bank. These results are presented in Table 1. In terms of the results of the Spearman correlation coefficient test shown in Table 2, it can be said that there is a relation between the trust variable and customers' loyalty, and as the r-value is positive (0.164), the relation is positive: as the trust factor increases in the banks, customers' loyalty will increase. By the results of the Friedman test, the trust factor, with an average rank of 2.35, holds the second priority among the relationship marketing factors. According to the results of the Spearman correlation coefficient test, there is a relation between commitment and customers' loyalty, and since the r-value is positive (0.151), the relation is positive: the higher the commitment, the higher the customers' loyalty. According to the Friedman test, the commitment component, with an average rank of 3.17, holds the highest ranking among the components of relationship marketing.
According to the results of the Spearman correlation coefficient test, there is a relation between communications and customers' loyalty, and since the r-value is positive (0.294), the relation is positive: the higher the communication, the higher the customers' loyalty. According to the Friedman test, the communications component, with an average rank of 2.28, holds the third ranking among the components of relationship marketing.
According to the results of the Spearman correlation coefficient test, there is a relation between conflict handling and customers' loyalty, and since the r-value is positive (0.224), the relation is positive: the higher the conflict handling, the higher the customers' loyalty. According to the Friedman test, the conflict handling component, with an average rank of 2.21, holds the lowest ranking among the components of relationship marketing. From the results of the statistical population mean test, it can be concluded that the relationship marketing factors in the public and private banks are in an appropriate situation. The two-sample t-test was used to assess the differences in the variables between the public and private banks. As shown in Table 3, the significance value of the test for all research variables is greater than the 0.05 significance level; therefore, no significant differences were observed among the research variables between the public and private banks, and the levels of these variables are relatively equal in the two banks.
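As a minimal sketch of the three procedures used above (Spearman correlation, Friedman ranking, and the two-sample t-test), the following Python code runs them on simulated five-point Likert scores; all values are invented, so the outputs will not reproduce the reported coefficients:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 110  # sample size per bank, as in the study

# Simulated construct scores standing in for questionnaire averages.
loyalty = rng.integers(1, 6, size=n).astype(float)
trust = np.clip(loyalty + rng.normal(0, 1.5, size=n), 1, 5)
commitment = rng.integers(1, 6, size=n).astype(float)
communications = rng.integers(1, 6, size=n).astype(float)
conflict_handling = rng.integers(1, 6, size=n).astype(float)

# Spearman correlation between a relationship marketing factor and loyalty.
r, p = stats.spearmanr(trust, loyalty)
print(f"Spearman r = {r:.3f}, p = {p:.4f}")

# Friedman test to rank the four factors across respondents.
chi2, p_f = stats.friedmanchisquare(trust, commitment, communications,
                                    conflict_handling)
print(f"Friedman chi2 = {chi2:.2f}, p = {p_f:.4f}")

# Two-sample t-test comparing a variable between the two banks.
public, private = loyalty[:55], loyalty[55:]
t, p_t = stats.ttest_ind(public, private)
print(f"t = {t:.2f}, p = {p_t:.4f}")  # p > 0.05 means no significant difference
```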
CONCLUSION AND SUGGESTIONS
The results of the research indicate that commitment, trust, communications and conflict handling, respectively, are associated with customers' loyalty in the public and private banks. Regarding the performance of the public and private banks on these four variables, the banks' best performance was in the field of commitment, while performance was at an average level in trust, communication and conflict handling. It can also be said that no significant difference was observed among the research variables between the public and private banks, and the levels of these variables are relatively equal in the two banks. Bahram Ranjbaran and Mojtaba Barari, in their own research among the banks of Isfahan, also used these four variables as relationship marketing factors, and they concluded that in the public banks all of these variables, and in the private banks all except the communication variable, had an important and significant effect on loyalty, which relatively corresponds with the results of this research. Ndubisi, in his research among the banks of Malaysia, also used these four variables as relationship marketing factors and reached the conclusion that all four have a significant influence on loyalty, which conforms completely to the results of this research for the public banks. The priority of the effect of these variables in his study was as follows: trust, communication, commitment, conflict handling. A notable point in Ndubisi's research is that conflict handling had the least effect on loyalty, while in the banks studied here, especially the private banks, conflict handling had a significant effect on customers' loyalty. Now, having identified the priority of the variables associated with customer loyalty in both public and private banks and the banks' performance on each of these variables, it can be said that in both public and private banks the most important variable affecting customers' loyalty is the commitment factor, that is, the banks fulfilling their promises to their customers, and the banks' best performance was on this variable. Thus, the banks have acted appropriately on this important variable; however, to increase loyalty they should continue to focus on it. The private and public banks rank trust second in priority, so banks should apply more effort in this field to enhance customers' loyalty.
The next variables were communications and conflict handling, on which the banks' performance was average, so these require attention in order to increase customers' loyalty. In the public and private banks, the most important factor related to customers' loyalty among the studied variables was the commitment factor, and the banks' main focus should be on this variable; in this field the private bank was more successful than the public bank. The two other variables, trust and communications, are of relatively equal importance. In the field of trust, the public banks were relatively more successful than the private banks; thus the private banks should make more effort to create trust among their customers. In the field of communications, both private and public banks have performed the same. In the field of conflict handling, the public and private banks performed more weakly than on the other factors, and here the private bank was more successful than the public one. According to the obtained results, we can claim that the programs for making customers loyal in the public and private banks can be, to a large extent, the same, because the desires and expectations of customers toward each of these banks are, to a large extent, the same. Companies are applying strategies by which they retain their present customers, and through data analysis and the use of appropriate technology they seek to obtain timely information about their customers in order to win their satisfaction and loyalty through constant, long-term communication. Relationship marketing is one such strategy that successful companies today are utilizing to achieve their purposes, and its proper use is recognized as a sustainable competitive advantage in today's business world. While attracting new customers was once the most essential policy of organizations, today strategic and business policies focus on retaining customers, improving their loyalty, and increasing their trust in the organization. Constant customers generally expand their purchases and, as mentioned before, the selling cost for such customers is much lower than for new or potential customers, and constant customers continuously recommend the organization to others. It seems that in today's business world, focusing on and applying the principles of relationship marketing can play a major part in retaining present customers and, as a result, in the efficiency of the organization, and it can be regarded as a sustainable competitive advantage. Owing to the significant advantages of relationship marketing, companies are moving toward relational contracts rather than individual contracts. The key factor differentiating relational from individual contracts is time: individual contracts are short-term, whereas relational contracts are formed for long-term situations and even continue in the form of after-sales services. Because they increase the competitive ability of companies, such contracts are preferred to individual, short-term contracts. As a result, recognizing relationship marketing and extending its dimensions is an essential factor in maintaining the market and increasing the competitive ability of companies. Therefore, an important issue that managers in the public and private banks should consider is that the
managers should focus more on the factors that provide proper service. Bank managers should move toward making intangible factors more tangible for the customer. Attracting more efficient manpower and wide advertising to create a more powerful image and brand are other actions managers may consider. Finally, it should be added that, although this research selected one bank from each of the public and private sectors as the sample, and it may therefore not be possible to generalize the results to all private and state banks, the useful findings of this research should not be neglected by public and private banks when planning to build customer loyalty from the customers' point of view. It is notable that the results of the present research cannot be generalized to other organizations and companies; they apply specifically to banks. As the duration of activity differed between the public and private banks, comparing them may impose limitations on the results of this research. Customers' loyalty can also differ by the type of market, industry and culture, which can limit the generalizability of the results to other markets and industries. Finally, it is suggested that future research investigate more public and private banks, especially considering the extent and duration of their activity, to increase the generalizability and accuracy of the results.
"Business",
"Economics"
] |
How Does the Pandemic Facilitate Mobile Payment? An Investigation on Users’ Perspective under the COVID-19 Pandemic
Owing to the convenience, reliability and contact-free nature of mobile payment (M-payment), it was widely adopted in China during the COVID-19 pandemic to reduce direct and indirect contact in transactions, allowing social distancing to be maintained and facilitating stabilization of the social economy. This paper aims to comprehensively investigate the technological and mental factors affecting users' adoption intentions of M-payment under the COVID-19 pandemic, so as to expand the domain of technology adoption under emergency situations. This study integrated the Unified Theory of Acceptance and Use of Technology (UTAUT) with perceived benefits from Mental Accounting Theory (MAT), plus two additional variables (perceived security and trust), to investigate 739 smartphone users' adoption intentions of M-payment during the COVID-19 pandemic in China. The empirical results showed that users' technological and mental perceptions conjointly influence their adoption intentions of M-payment during the COVID-19 pandemic, wherein perceived benefits are significantly determined by social influence and trust, corresponding with the pandemic situation. This study is the first to integrate UTAUT with MAT to develop a theoretical framework for investigating users' adoption intentions. Meanwhile, this study is among the first to investigate the antecedents of M-payment adoption under a pandemic situation, and it indicates that users' perceptions are positively influenced when a technology's specific characteristics can benefit a particular situation.
Introduction
With the increasingly widespread popularity of mobile devices, our daily lives have changed significantly, especially in terms of financial transactions. Mobile payment (M-payment) has been dramatically adopted across various industries in recent years. According to a WorldPay report, M-payments accounted for 22% of global point-of-sale spending in 2019, and this percentage will increase to 29.6% in 2023 [1]. Moreover, China's overwhelming adoption of M-payments (Alipay and WeChat Pay) at the point of sale using Quick Response (QR) codes drove nearly half (48%) of point-of-sale payments in 2019 [1]. Various previous studies have advanced the understanding of adoption intentions of M-payment in different contexts [2][3][4]. However, gaps remain regarding how the determinants vary and what theoretical evidence supports different perspectives under emergency conditions [5].
The 2019 novel coronavirus (COVID-19) broke out in December 2019 and has expanded dramatically across the globe. As of 7 December 2020, there were 66,243,918 confirmed cases of COVID-19 and 1,528,984 deaths worldwide, as reported by the World Health Organization [6]. Due to the high risk of COVID-19 transmission, reducing contact among people and maintaining social distancing were highly recommended by the WHO (2020b) and Tang et al. (2020) [7,8]. In this sense, the contactless characteristic of M-payments can potentially meet users' mental and physical expectations by supporting their transaction processes and protecting their safety. Accordingly, adoption of M-payment in China increased significantly during the COVID-19 pandemic. According to a report from China Banking and Insurance News (2020), the number of transactions made by M-payment was 22.4 million in the first quarter of 2020 in China, up 187% from the previous year (2019) [9]. Meanwhile, based on the CNNIC (2020) report comparing the proportion of smartphone users who used M-payment from 2019 to 2020, this percentage increased from 73.5% in June 2019 to 85.3% in March 2020 and reached 86.0% in June 2020 in China, which indicates that M-payment contributes to maintaining individual and organizational transactions during emergency situations [10]. Furthermore, users' payment habits and business models changed from traditional face-to-face transactions to contactless M-payment transactions during the pandemic, which in turn efficiently supported the survival of various businesses and maintained the development of the social economy under an emergency situation. Therefore, what factors influence users' intentions to adopt M-payment during the pandemic? It becomes highly valuable for relevant researchers and stakeholders to understand customers' behaviors under the pandemic and to comprehensively investigate information technology adoption under an emergency situation in order to develop business strategies correspondingly.
Traditional adoption models (e.g., the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT)) evaluate users' intentions as determined by technological perceptions, with an obvious limitation in capturing the influence of users' mental perceptions [11,12]. Notably, given the recommendations of governments and the WHO (2020b) regarding restrictions on direct and indirect contact among people under the pandemic situation [7], the contactless feature of M-payment potentially influenced users' attitudes regarding the benefits of using M-payment for daily transactions, indicating that environmental conditions affect users' mental processes with regard to adopting M-payment [13]. Thus, this paper draws on mental accounting theory (MAT) to explain customers' psychological cognition of the benefits of using M-payment under a pandemic situation. To fill the gap of limited integration of technological and mental perceptions in technology adoption, this study incorporates MAT into UTAUT to comprehensively investigate the antecedents of M-payment adoption from the users' perspective. Specifically, perceived benefits are considered an important factor in terms of users' expectations and help determine their decisions [14]. Meanwhile, due to the influence of the pandemic, perceived security and trust are also considered additional antecedents of users' adoption intentions of M-payment [15]. Perceived security is the most significant determinant of trust, positively affecting users' intentions to use M-payment [16]. Therefore, this study proposes a new adoption model, including perceived benefits, performance expectancy, effort expectancy, social influence, perceived security, trust and behavioral intention, to investigate users adopting M-payment during the COVID-19 pandemic, organized in the following sections: Section 2: theoretical background on the utilization of M-payments during the COVID-19 pandemic, MAT and UTAUT; Section 3: development of hypotheses and research model; Section 4: research methodology and data demonstration; Section 5: data analysis; Section 6: discussion; Section 7: theoretical and practical implications; Section 8: limitations and future research recommendations; Section 9: conclusions.
M-Payment and Its Utilization under the COVID-19 Pandemic
M-payment, an electronic financial transaction method for paying for goods, services and bills via mobile devices [5], comprises three leading contactless technologies: Short Message Service (SMS), Near Field Communication (NFC) and Quick Response (QR) codes [2]. Due to the convenient, open and secure features of M-payment, a new business climate has been formed by its wide adoption, as financial transactions are able to take place anywhere, anytime and by anyone, which has established colossal market potential in various contexts, especially under pandemic situations [17]. Many researchers have investigated the factors affecting M-payment adoption by reviewing theoretical frameworks and variables, supporting the view that users' adoption intentions of M-payment are determined by technological and mental perceptions conjointly, as shown in Table 1. However, few studies have analyzed adoption intentions determined by mental and technological factors conjointly under an emergency situation. COVID-19, as a global pandemic, has dramatically influenced people's daily lives and the world economy. According to relevant studies [8] and a report from the WHO (2020b), COVID-19 carries significant transmission risk through direct contact with infected people and indirect contact with surfaces in the immediate environment or with objects used on an infected person. In this sense, the contact rate can significantly contribute to the infection risk of COVID-19; thus the contactless feature, as a typical characteristic of M-payment, provides mental and physical support to protect users and maintain their transaction experience [20]. Moreover, due to the restrictions imposed by the Chinese Government to avoid direct contact and maintain social distancing during the COVID-19 pandemic, M-payment was widely adopted for its contactless feature and trustworthy performance. Users formed positive cognitions and feelings of safety about using M-payment as their main payment method, which reduces virus transmission risk, protects personal safety and supports the social economy [9].
Mental Accounting Theory (MAT)
Mental accounting theory (MAT), proposed by Thaler (1985), is defined as the set of cognitive operations individuals use to categorize, organize and evaluate the consequences of their decision-making in financial activities [21]. Specifically, MAT explains that personal desires influence individuals' cognitive processes, and that their psychological processes for valuing a specific technology should be taken into consideration in an environment of voluntary usage [22]. Accordingly, based on the normative principle of fungibility at the point of purchase, mental accounting is engaged and decision-making is based on the evaluation of the perceived benefits of the purchase activity [23]. Concretely, in the technology adoption context, a consumer's adoption decision is based on the perceived benefits of utilizing the technology [13]. Moreover, MAT can be incorporated into an adoption model to complementarily explain customers' intentions of technology adoption [24]. Cheng and Huang (2013) incorporated MAT into TAM to investigate the mental factors affecting customers' intentions to adopt high-speed railway mobile ticketing services [25]. Park et al. (2018) proposed that the multidimensional perceived benefits of M-payment services are influenced by social influence and technology anxiety, which indicates that users' willingness to use M-payment is significantly determined by the external environment and internal technological perceptions [14]. Furthermore, MAT provides a theoretical basis for explaining consumers' decisions under conditions of risk and uncertainty [13]. In the context of the COVID-19 disaster, customers' psychological processes of adopting M-payment are significantly influenced by the contactless feature of M-payment, which is well adapted to the environmental situation, public restrictions and users' requirements. Therefore, MAT is appropriate for explaining users' mental cognition of using M-payment under the COVID-19 pandemic.
Unified Theory of Acceptance and Use of Technology (UTAUT)
UTAUT was developed by Venkatesh et al. (2003). It consists of performance expectancy, effort expectancy, social influence and facilitating conditions as determinants of behavioral intentions to use a new technology system [26]. UTAUT has been applied in various contexts of technology adoption and has been revised with additional variables to explain users' behavioral intentions [4]. For example, Khalilzadeh et al. (2017) integrated security-related factors with the UTAUT model and validated that security and trust have a strong effect on customers' adoption intentions of NFC M-payments in the restaurant industry [15]. Marinković et al. (2020) modified the UTAUT model with extra variables (perceived trust and satisfaction) to evaluate customers' usage intentions of M-commerce [27]. Moreover, UTAUT has also been integrated with other models to evaluate users' behavioral intentions [2,28]. Di Pietro et al. (2015) integrated TAM, DOI and UTAUT to verify M-payment adoption intentions [2]. Oliveira et al. (2014) integrated UTAUT with the initial trust model and the task-technology fit model to investigate users' behavioral intentions to adopt mobile banking in Portugal [28]. However, UTAUT focuses on technological expectations rather than mental expectations, which limits its ability to explain the expectations that determine users' intentions toward a technology [12]. Thus, it is necessary to integrate UTAUT with MAT to explain complementarily the technological and mental perceptions underlying usage intentions of M-payment during the COVID-19 pandemic. The development of hypotheses and the research model is illustrated in the following section.
Revisiting MAT: Perceived Benefits (PBs)
According to MAT, when consumers perform a particular behavior, they tend to evaluate a possible beneficial outcome [21]. Perceived benefits represent users' perceptions of the functional benefits of M-payment services, which determine their adoption decisions [14]. Perceived benefits support a better understanding of users' mental perceptions of adoption intentions for various technologies, such as online shopping [29] and mobile banking [30]. Meanwhile, perceived benefits have been identified as multidimensional, including utilitarian, hedonic and social values, which are determined by social influence and technology uncertainty [14,24]. However, few studies focus on the perceived benefits of technology characteristics corresponding to a particular condition. Specifically, in a pandemic situation, social distancing is an efficient way to decrease COVID-19 transmission risk among people [7,31]. Compared with traditional payments, the contactless characteristic of M-payments supports users in maintaining social distancing and avoiding the direct and indirect contact involved in cash or point-of-sale terminal transactions. This allows users to form opinions about the perceived mental and physical benefits of personal safety, convenience and utility when using M-payment technology as a financial transaction method during the COVID-19 pandemic. Thus, perceived benefits are considered a mental factor influencing users' adoption intentions of M-payment during the COVID-19 pandemic, expressed as the following hypothesis.
Hypothesis 1. Perceived benefits have a positive effect on the behavioral intention to adopt M-payments during the COVID-19 pandemic.
Performance Expectancy (PE)
Performance expectancy is defined as an individual's perception of the extent to which use of an information system facilitates the completion of a task and improves work performance [26]. Performance has been conceptualized using attributes related to the system's efficiency, speed and accuracy in task completion [11]. Especially during the COVID-19 pandemic, users show more concern for payment efficiency and accuracy. Concretely, in the M-payment adoption context, performance expectancy has significantly positive effects on users' adoption intentions in various contexts [2,3,32,33]. Therefore, when users perceive M-payment as a useful way to accomplish their transactions during the pandemic, they will choose M-payment instead of traditional payment. Accordingly, this paper proposes the following hypothesis.
Hypothesis 2.
Performance expectancy has a positive effect on the behavioral intention to adopt M-payments during the COVID-19 pandemic.
Effort Expectancy (EE)
According to UTAUT, effort expectancy is referred to as "the degree of ease associated with the use of the system" [26]. Effort expectancy influences users' attitudes toward adopting M-payment [17], revealing an even higher influence than performance expectancy [34]. Specifically, Liébana-Cabanillas et al. (2018) found that effort expectancy is the most significant factor affecting users' intentions to use NFC M-payment systems in public transportation [3]. Moreover, effort expectancy has also been verified to have a positive impact on performance expectancy in various technology adoption contexts [2,17,35]. Therefore, the following hypotheses are proposed.
Hypothesis 3. Effort expectancy has a positive effect on the behavioral intention to adopt M-payments during the COVID-19 pandemic.
Hypothesis 4. Effort expectancy has a positive effect on performance expectancy to adopt M-payments during the COVID-19 pandemic.
Social Influence (SI)
In terms of UTAUT, the definition of social influence is "the degree to which an individual perceives that significant others believe he or she should use the new system" [26]. Slade et al. (2015) explained the underlying assumption that users prefer to consult their social network to reduce any anxiety arising from uncertainty [36]. Especially during the COVID-19 pandemic, recommendations and suggestions from important, relevant people weigh more heavily in individuals' decisions and actions. In previous studies, social influence has been widely tested in different contexts for its impact on usage intentions of mobile technologies [15,24,33,36]. Morosan and DeFranco (2016) showed that social influence has a significant effect on the intention to use M-payment [37]; Kerviler et al. (2016) illustrated that social influence plays a considerable role in explaining users' intentions to use M-payment [24]. Moreover, social influence, as a determinant in the formation of users' attitudes, significantly affects users' perceived multidimensional benefits of using M-payment services [14]. Thus, the following hypotheses are proposed.
Hypothesis 5. Social influence has a positive effect on the behavioral intention to adopt M-payments during the COVID-19 pandemic.
Hypothesis 6. Social influence has a positive effect on the perceived benefits to adopt M-payments during the COVID-19 pandemic.
Trust (TR)
Trust is defined as users' willingness to expect a positive outcome from the technology's future performance and a subjective belief that the service provider will fulfil its obligations [38]. The COVID-19 pandemic has brought uncertainty and social pressure to individuals' daily transaction processes, and trust in M-payment platforms can increase the likelihood of users making contactless M-payments rather than traditional payments [27,39]. Zhu et al. (2017) validated that trust has the most significant effect on the behavioral intention to use M-payment [39]. Meanwhile, many studies have verified that trust significantly determines users' usage intentions of M-payments [16,18,39]. Zhou (2013) modified a trust-based adoption model and found that trust has significant direct and indirect impacts on the behavioral intention to use M-payment [20]. Moreover, trust has been validated as an additional variable of UTAUT that positively influences performance expectancy, consequently affecting users' behavioral intentions to use M-payment [15]. Similar results have been found in other studies [35], including trust counteracting perceived risk and uncertainty when adopting a new technology [15,16]. Moreover, perceived risk combines uncertainty with the seriousness of the potential outcome [24] and negatively influences the multidimensional perceived benefits [14]. Thus, it can be summarized that trust has a positive impact on perceived benefits, as also supported by Khalilzadeh et al. (2017). Therefore, this study proposes the following hypotheses.
Hypothesis 7.
Trust has a positive effect on the behavioral intention to adopt M-payments during the COVID-19 pandemic.
Hypothesis 8.
Trust has a positive effect on performance expectancy to adopt M-payments during the COVID-19 pandemic.
Hypothesis 9.
Trust has a positive effect on perceived benefits to adopt M-payments during the COVID-19 pandemic.
Perceived Security (PS)
Perceived security is defined as "the degree to which a customer believes that using a particular M-payment procedure will be secure" [40]. In conducting financial transactions, a lack of perceived security against the risks associated with mobile transactions is one of the most frequent reasons users refuse to adopt M-payments [41]. Previous studies have shown that perceived security is an important factor determining whether users will adopt M-payments [2,3,42]. Johnson et al. (2018) found that perceived security has the most significant positive impact on a user's intention to adopt M-payment [43]. Moreover, perceived security significantly increases users' trust by protecting them from transactional uncertainties and risks [15,44]. Shao et al. (2018) verified that security is the most significant antecedent of customers' trust affecting usage of M-payment in both male and female groups [16]. Therefore, perceived security of M-payment, considered as an extra variable of UTAUT, is a crucial guarantee for establishing users' trust in using M-payment under a pandemic. Accordingly, this study proposes the following hypotheses.
Hypothesis 10. Perceived security has a positive effect on the behavioral intention to adopt M-payments during the COVID-19 pandemic.
Hypothesis 11. Perceived security has a positive effect on trust to adopt M-payments during the COVID-19 pandemic.
Research Model
Based on the above hypotheses, all measurement items were adapted from previous studies [4,8,11,14-16,19,24] and reasonably modified to correspond to the research purpose of explaining the mental and technological factors affecting users' behavioral intentions to adopt M-payments under the COVID-19 pandemic. Specifically, users' adoption intentions of M-payment under the COVID-19 pandemic are conjointly determined by the variables of the revised UTAUT model (explaining users' technological perceptions) and perceived benefits (the MAT variable, representing users' mental cognition and psychological acceptance of using M-payment under pandemic conditions). The questionnaire is presented in Appendix A. Moreover, this study revises the UTAUT model by integrating performance expectancy, effort expectancy and social influence with the additional variables perceived security, trust and perceived benefits from MAT, establishing the research model depicted in Figure 1 with the proposed hypothesis relations.
Measurement
In order to validate the proposed conceptual model and examine the research hypotheses, an online questionnaire survey was designed and applied to data collection. Specifically, the questionnaire consisted of two parts. The first part collected respondents' demographic data with closed-ended questions covering gender, age, education, occupation and M-payment experience. The second part was developed from the constructs and items of the hypotheses above, consisting of 27 measurement items as indicators of perceived benefits, performance expectancy, effort expectancy, social influence, trust, perceived security and behavioral intention. In order to reduce confusion and save time for the participants [45,46], a five-point Likert scale (from 1 to 5, representing "strongly disagree" to "strongly agree") was applied to the items of each construct.
The main survey targets of this research were smartphone users who used or intended to use M-payment services in China during the COVID-19 pandemic. In order to avoid the impact of cultural and language differences, the questionnaire was translated into Chinese by a professional translator and then back-translated into English, followed by confirmation of translation equivalence. The questionnaire data were collected on the Chinese social media platform WeChat over a three-week period during the height of the COVID-19 pandemic in China, from 11 March 2020 to 31 March 2020.
Data Demographic Characteristics
According to the N:q rule proposed by Jackson (2003), an ideal sample-size-to-parameters ratio would be higher than 20:1 [47]; therefore, the sample size of this study should be higher than 140. This study dispatched a total of 1000 online questionnaires via WeChat; 864 responses had been collected by 1 April. After removing answers with missing values, a total of 739 valid questionnaires were accepted, yielding a final response rate of 73.9%. Following the guideline of Ryans (1974) [48], the Kolmogorov-Smirnov test was applied to check the nonresponse bias of the sample by comparing the male and female groups. The demographic distribution of the sample was 45.74% male and 54.26% female; 53.86% of participants were in the age bracket between 21 and 30; 61.71% of participants held bachelor's or college degrees (this group is more active on social media and thus more likely to respond to the questionnaire) [49]; employees and students were the two main groups of participants, at 43.03% and 23.68%, respectively; 56.16% of respondents used M-payments at least once per day and 93.78% at least once per week during the COVID-19 pandemic, in accordance with a report from Ipsos (2020) stating that the penetration rate of M-payment among mobile Internet users in China (those who have used M-payment in the last three months) is 96.9% [50]. The reasons for this high rate of M-payment adoption during the pandemic can be summarized as follows. Firstly, because contact-based daily transactions were restricted by the Chinese Government during the COVID-19 pandemic [10], people tended to complete transactions in a contactless way. Secondly, according to the suggestions and recommendations of the government and the WHO [7], avoiding contact among people is an efficient way to reduce the transmission risk of COVID-19; thus, M-payment was widely adopted by customers and retailers for general transactions. Thirdly, M-payment apps were used to track users' health status during the pandemic; for example, Alipay Health Code assigned a color code (green, yellow or red) to indicate a user's health status. Therefore, M-payments were widely adopted by smartphone users in China not only to support daily transactions, but also to confirm their health status during the COVID-19 pandemic. Specific sample demographics are listed in Table 2.
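The nonresponse-bias check described above is straightforward to reproduce. The following is a minimal sketch (not the authors' code) of a two-sample Kolmogorov-Smirnov comparison of the male and female groups; the scores below are hypothetical placeholders, not the study's data.

```python
# Two-sample Kolmogorov-Smirnov check for nonresponse bias,
# comparing male vs. female response distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical 5-point Likert scores on one construct, split by gender
male_scores = rng.integers(1, 6, size=338)    # ~45.74% of 739
female_scores = rng.integers(1, 6, size=401)  # ~54.26% of 739

stat, p_value = ks_2samp(male_scores, female_scores)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.4f}")
# A large p-value (> 0.05) means the two groups' response distributions
# do not differ significantly, i.e. no evidence of nonresponse bias.
```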
Data Analysis
The covariance-based structural equation modelling (CB-SEM) technique was used for quantitative data analysis. SPSS 17 and AMOS 22 were applied in this study, following the two-step approach suggested by Anderson and Gerbing (1988): first validating the measurement model and then testing the structural model. Maximum likelihood estimation was used in the model assessment [51].
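As a rough illustration of the two-step approach, the sketch below specifies an abbreviated measurement and structural model in lavaan-style syntax and fits it by maximum likelihood. It assumes the open-source semopy package as a stand-in for SPSS/AMOS (the authors used AMOS 22); the file name and item labels (PE1, EE1, BI1, ...) are hypothetical placeholders.

```python
# Minimal two-step CB-SEM sketch with semopy (an assumed stand-in for AMOS).
import pandas as pd
from semopy import Model, calc_stats

# Abbreviated specification: measurement part ("=~", step 1) plus a few of
# the hypothesized structural paths ("~", step 2; H2-H4 shown here).
desc = """
PE =~ PE1 + PE2 + PE3
EE =~ EE1 + EE2 + EE3
BI =~ BI1 + BI2 + BI3
PE ~ EE
BI ~ PE + EE
"""

df = pd.read_csv("survey_items.csv")  # hypothetical Likert-item responses
model = Model(desc)
model.fit(df, obj="MLW")    # maximum likelihood (Wishart) estimation
print(model.inspect())      # loadings, path estimates and p-values
print(calc_stats(model))    # fit indices (chi-square, CFI, RMSEA, ...)
```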
Measurement Model
A measurement model aims to assess the fit between indicators and latent variables. Exploratory factor analysis (EFA) was applied to examine construct reliability, and confirmatory factor analysis (CFA) was applied to assess the convergent and discriminant validity of the measurement model. All seven hypothesized latent constructs in the CFA model were allowed to covary and were determined by their measurement items as reflective indicators.
Construct reliability was tested by Cronbach's alpha. As presented in Table 3, all Cronbach's alpha values of the latent variables lie in the range 0.807 to 0.897, exceeding the 0.70 threshold suggested by Nunnally and Bernstein (1994) [52], which demonstrates construct reliability. Convergent validity was assessed by the standardized factor loadings of all sample items. Table 3 shows that all item loadings are greater than the ideal level of 0.70 [53], which demonstrates eligible convergent validity of the measurement model. Convergent validity was further assessed by the composite reliability (CR) and average variance extracted (AVE) criteria. As shown in Table 4, the constructs have CRs in the range 0.811 to 0.898, all above 0.7 [54]. Meanwhile, all constructs have AVEs in the range 0.589 to 0.688, meeting the suggestion of Fornell and Larcker (1981) that AVE should be higher than 0.5 [55], which means the latent variables explain more than half of the variance of their indicators. Therefore, the consistency of measurement between the indicators and latent variables has been established. Discriminant validity reflects whether two factors are statistically distinct. It was evaluated using two criteria. Firstly, according to Fornell and Larcker (1981), the square root of the AVE should be greater than all correlations between each pair of constructs [55]. Table 4 shows that for each factor the square root of the AVE is larger than its correlation coefficients with all other latent constructs, which proves that each construct shares more variance with its associated indicators than with any other construct [55]. Secondly, all AVEs are greater than the maximum shared squared variance (MSV) [56]. Thus, the scales satisfy the suggested criterion of discriminant validity.
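The reliability and validity criteria above reduce to a few simple formulas. The sketch below (not the authors' code) computes Cronbach's alpha from raw item data and CR, AVE and the Fornell-Larcker quantity from standardized loadings; the loading values are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR from standardized factor loadings of one construct."""
    errors = 1 - loadings**2
    return loadings.sum()**2 / (loadings.sum()**2 + errors.sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted."""
    return (loadings**2).mean()

# Hypothetical loadings for one construct (illustration only)
l = np.array([0.81, 0.84, 0.79, 0.83])
print(f"CR  = {composite_reliability(l):.3f}")  # should exceed 0.70
print(f"AVE = {ave(l):.3f}")                    # should exceed 0.50
# Fornell-Larcker: sqrt(AVE) of each construct must exceed its
# correlations with every other construct.
print(f"sqrt(AVE) = {np.sqrt(ave(l)):.3f}")
```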
The assessment results of the measurement model validate the construct reliability and convergent and discriminant validity of constructs satisfactorily. The constructs can be used to test the structural model.
Discussion
Based on the data analysis results, ten of the eleven hypotheses were confirmed in this study, which demonstrates that the current study provides an appropriate adoption model for explaining the antecedents of users' adoption intentions of M-payment under the pandemic.
Specifically, performance expectancy had the most significant positive impact on users' adoption intentions of M-payments during the COVID-19 pandemic (Hypothesis 2), which corresponds to the vast majority of previous studies [37,60]. This confirms that the utility and practicability of M-payment technology can improve users' payment efficiency in emergency situations. In particular, M-payment provides a fast payment process without any direct or indirect contact among people, which significantly influenced users' adoption intentions during the pandemic. Users perceive M-payment as a more useful and more reliable method than traditional payments to support their transactions under the pandemic.
Meanwhile, performance expectancy is significantly determined by effort expectancy (Hypothesis 4) and trust (Hypothesis 8), in accordance with findings from previous studies [2,35]. This study is the first to validate the effects of effort expectancy and trust on performance expectancy under a pandemic, filling the absence of confirmation that simplicity and trustworthiness influence the perceived functional utility of M-payment in an emergency situation. Accordingly, the results support that the accessibility and operability of the technology's interface and functions positively shape users' performance expectancy; meanwhile, the reliability and trustworthiness of the technology's services are essential for high utilization of the technology in an emergency situation.
Moreover, the second-largest significant effect on users' behavioral intentions to adopt M-payments during the COVID-19 pandemic comes from perceived benefits (Hypothesis 1). This result illustrates that perceived benefits correspond with individuals' mental expectations of the contributions of M-payment under the pandemic. Specifically, perceived benefits such as M-payment's efficiency not only influence users' technological perceptions of convenience and utility [14,24], but also increase perceived safety benefits through M-payment's contactless characteristic. Concretely, users' mental expectations are satisfied by the perceived reliability and safety of using contactless payment to reduce contact among people and maintain social distancing, thereby decreasing the COVID-19 transmission risk [7,31]. Thus, perceived benefits reflect users' mental cognition of technology features that can overcome a particular environmental issue, which in turn significantly influences users' adoption intentions.
Meanwhile, under the conditions of the COVID-19 pandemic, perceived benefits as mental expectations are significantly influenced by social influence (Hypothesis 6) and trust (Hypothesis 9). Social pressure and the opinions of important, relevant people play an important role in shaping an individual's mental expectations, thereby affecting his or her behavioral intention [14]. When users receive recommendations from close friends or family indicating that M-payment protects their personal safety by avoiding contact during the transaction process and thus reduces the infection risk of COVID-19, they tend to consider M-payment a helpful and valuable payment method. Moreover, trust was found to have a significant effect on perceived benefits in this study. The reputation and trustworthiness of M-payments are reinforced by the contactless advantage of M-payment in optimizing users' experience and supporting their safety during the COVID-19 pandemic, which strengthens users' perceived benefits of adopting M-payments in the emergency situation.
Furthermore, social influence, as the third most important factor, has a statistically significant impact on behavioral intention (Hypothesis 5), which means the opinions, recommendations and support from users' close relationships are essential in the formation of users' behavioral intentions to adopt M-payments during the COVID-19 pandemic. This result is supported by previous studies conducted in normal situations [33,36]. Especially under the pandemic, people rely more on the support and recommendations of important people in their lives; their family and close friends more easily influence their behaviors. Accordingly, the reputation of M-payment and the word-of-mouth effect are crucial for attracting users' adoption intentions of M-payment and forming a new payment habit under the influence of the pandemic.
In addition, this study confirms Hypothesis 7 and Hypothesis 10: trust and perceived security have statistically significant effects in explaining users' behavioral intentions to use M-payments during the COVID-19 pandemic. Specifically, consumers have developed trust in M-payment platforms through their reliable performance and mature legal framework protection, and so they worry less about financial risks and reap more benefits from the service [39]. Thereby, users' adoption intentions are influenced by technological and privacy security together with users' trust, from technological and mental perspectives [16,18,43]. Moreover, Hypothesis 11 was also supported: perceived security is significantly associated with trust. In this sense, users' perceptions of security can reduce uncertainty and crucially guarantee M-payment performance, improving users' trust in M-payment platforms [15]. This demonstrates that trust and perceived security are significantly associated, and both factors conjointly determine users' adoption intentions of M-payments under the pandemic. Furthermore, M-payments involve sensitive personal data; it is therefore necessary to ensure the reliability and credibility of M-payment platforms in securing transactions and protecting personal information [61]. Moreover, given the security, trustworthiness and reliability of M-payment platforms, users can accept that records of their transaction times and locations during the pandemic are utilized by governments and health institutions to trace contacts in payment processes and to monitor, update and report the pandemic transmission status. Accordingly, users can be made aware of the infection situation around them clearly and in a timely manner, which positively influences their intentions to use M-payment during the COVID-19 pandemic to reduce infection risk.
However, Hypothesis 3 was rejected in this study, which means that the ease of understanding and handling M-payment systems does not have a direct impact on users' behavioral intentions to adopt M-payments during the COVID-19 pandemic. Similar results are reported in previous M-payment studies [3,62]. The main reason for this result is that users have become accustomed to smartphone functions and more skillful through their prior use of various smartphone applications [60]. Meanwhile, under the COVID-19 pandemic, user behavior is determined more by perceptions related to personal safety, such as reliability, utility, security, trustworthiness and benefits, which provide multidimensional support for protecting transaction processes during a pandemic. Thus, ease of use is a less critical, surmountable factor in users' adoption intentions during the pandemic.
Theoretical Implications
This study makes three main theoretical contributions. First, this study empirically examined the factors affecting users' adoption intentions of M-payments under a pandemic situation, which previous studies had not evaluated. Consequently, the study substantially enriches the literature on technology adoption during a pandemic. Specifically, this study illustrates a worthwhile direction for understanding users' adoption intentions by not only examining users' perceptions from technological perspectives, but also assessing users' mental expectations. Moreover, users' technological and mental perceptions of technology are significantly influenced by emergency situations. Therefore, this study provides a direction for future research to analyze new technology adoption from technological and mental perspectives conjointly and in correspondence with the specific situation, especially emergency situations.
Second, this study integrated the UTAUT model with perceived benefits from MAT and two extra variables, perceived security and trust, which significantly contributes to the theoretical development and framework coordination of the emerging literature on information technology adoption. Simultaneously, this study makes a substantial contribution to the theoretical expansion of UTAUT and MAT by being the first to propose and verify new causal paths (PB → BI, SI → PB, TR → PB, TR → PE, TR → BI and PS → BI) and by rejecting the path EE → BI, thereby clarifying the interactions of the variables in the new comprehensive model. Therefore, the integrative research approach presented in this study can serve as a beneficial and valuable reference for modifying and evaluating new adoption models for investigating novel technology adoption.
Third, this study is among the first to focus on technology characteristics corresponding to the pandemic situation as a potential antecedent determining users' mental and technological perceptions. Specifically, the contactless feature of M-payment avoids contact during transaction processes and maintains social distancing, which improves users' perceived multidimensional benefits and optimizes their experience of using M-payments under the pandemic situation. Meanwhile, given the disaster status of the COVID-19 pandemic, effort expectancy became less important than other variables in determining whether users would adopt M-payment. Thus, it is important to consider whether a particular technology's features can influence users' interpretations of the perceived mental and technological benefits corresponding to particular situations or conditions, in order to comprehensively explain technology adoption in an emergency situation.
Practical Implications
Moreover, four main practical implications emerge from this study. First, the current research enhances existing knowledge of the adoption intention of M-payments in an emergency situation and enriches the understanding of how a pandemic changes users' payment habits. While a pandemic brings suffering to people and society, it can also accelerate the development of new technology that benefits individuals, organizations and society in surviving the emergency situation; this is valuable for relevant stakeholders when establishing business strategies that take the pandemic into account.
Second, this study could be valuable to start-up companies, policymakers, government bodies and private service providers interested in M-payment services. M-payment has become increasingly popular and provides useful services for efficient transaction processes, particularly in emergency conditions. In the context of a pandemic, M-payment can increase the perception of personal safety and maintain the stable development of business. Based on the findings of this study, in addition to providing an easy-to-use application, relevant stakeholders should recognize the importance of M-payment in shaping users' perceived benefits and design system attributes accordingly under the pandemic situation. Meanwhile, M-payment service providers should guarantee the compatibility, efficiency and security of transactions to meet customers' requirements and match their lifestyles. In addition, enhancing the public impression of M-payment and stimulating a positive word-of-mouth effect would improve technology providers' reputations in different situations.
Third, this study provides new technology providers with a comprehensive understanding of customers' adoption intentions, which are determined by technological and psychological perceptions conjointly. Consequently, relevant stakeholders should take advantage of the features of a technology (such as the contactless characteristic of M-payment) that correspond to its benefits in a particular situation (such as avoiding direct or indirect contact to decrease COVID-19 transmission risk), while maintaining service quality, reliability and efficiency, in order to meet consumers' physical and mental concerns, optimize their experience, and thereby increase acceptance among the target population.
Finally, the findings of this study can serve as a reference for other online-to-offline (O2O) service industries in a pandemic situation. Relevant businesses could utilize the results to develop appropriate strategies that combine the benefits of technology characteristics with users' technological perceptions and mental expectations, expanding markets to adapt to different emergency situations and building better customer bases.
Limitations and Future Research
There are several limitations inherent in this study which need to be acknowledged. Firstly, data collection was restricted to China during a particular period of the COVID-19 pandemic; the results may not generalize to different countries and situations. Future studies should replicate this model, collect data from different nationalities, and consider specific benefits corresponding to particular situations. Furthermore, the research model can be examined through cross-cultural studies to better understand variation across cultural backgrounds.
Secondly, the variables and their interactions analyzed in this study were limited; e.g., the selected variables came mainly from the technology adoption perspective. Future research could integrate further relations between variables, such as social influence affecting perceived security [15], and combine technological indicators with variables from a health and risk perspective. Meanwhile, in order to gain a deeper understanding of the mental and technological factors affecting adoption intentions of novel technology, future research can incorporate other variables into the research model, such as a cultural moderator or satisfaction, as recommended in previous studies [42,63,64].
Thirdly, the data collection period was limited, and the data were homogeneously distributed and collected through WeChat (a mobile social media application in China). Future data collection should cover longer time spans, users from different areas (urban and rural), different periods of M-payment use, and various collection modes (online and offline surveys).
Finally, no distinction was made between types of M-payment (such as SMS, NFC and QR), M-payment platforms (such as Apple Pay, Samsung Pay, WeChat Pay and Alipay) or patterns of electronic transaction (such as transactions via computer versus via mobile device). A future study could therefore distinguish the different payment methods or platforms of M-payment techniques in accordance with specific research objectives.
Conclusions
In conclusion, we proposed a theoretical adoption model integrating UTAUT with perceived benefits from MAT and two additional variables, trust and perceived security, to explain the mental and technological factors affecting users' behavioral intentions to adopt M-payment during the COVID-19 pandemic in China. The research model provided extensive explanatory power, showing that users' payment habits changed under the influence of the pandemic and that adoption intentions of M-payment are determined by technological perceptions and mental expectations conjointly. Performance expectancy, perceived benefits, social influence, trust and perceived security significantly facilitate users' adoption intentions of M-payments during the COVID-19 pandemic. Specifically, the contactless characteristic of the M-payment technique is beneficial for maintaining social distancing and protecting personal safety during a pandemic. This study also explored new causal relationships and found that perceived benefits are significantly determined by social influence and trust. Moreover, performance expectancy is influenced by effort expectancy and trust, further explaining users' behavioral intentions to use M-payments during the COVID-19 pandemic.
Furthermore, this study provides several significant theoretical and practical contributions to investigating novel technology adoption in a particular situation, adding to the knowledge and understanding of the extension of UTAUT applications: users' payment habits have changed because of the pandemic, and the adoption intention of M-payment is determined by users' technological perceptions and mental expectations. In addition, this study recommends that researchers and relevant stakeholders focus on the particular characteristics of M-payments that correspond to the pandemic, which can influence the perceived mental and technological benefits of the user. Understanding users' behaviors is an efficient way to analyze new technology adoption and to develop appropriate strategies for optimizing users' experiences.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
"Economics",
"Computer Science",
"Business"
] |
Sum of generalized alternating harmonic series with three periodically repeated numerators
This contribution deals with a generalized convergent harmonic series with three periodically repeated numerators $1, a, b$. Firstly, it is derived that the only value of the coefficient $b$ for which this series converges is $b=-(a+1)$. Then the formula for the sum of this series is derived analytically. A relation for calculating the value of the constant $a$ from an arbitrary sum also follows from the derived formula. The obtained analytical results are finally verified numerically by using the computer algebra system Maple 15 and its basic programming language.
INTRODUCTION
Let us recall the basic terms and notions. The harmonic series is the sum of the reciprocals of all natural numbers except zero (see e.g. web page [4]), i.e. the series
$$\sum_{n=1}^{\infty}\frac{1}{n}=1+\frac{1}{2}+\frac{1}{3}+\cdots .$$
The divergence of this series can be easily proved e.g. by using the integral test or the comparison test of convergence. The series
$$\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}=1-\frac{1}{2}+\frac{1}{3}-\cdots$$
is known as the alternating harmonic series. This series converges by the alternating series test. In particular, its sum (interesting information about sums of series can be found e.g. in book [2] or paper [1]) is equal to the natural logarithm of 2:
$$\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}=\ln 2 .$$
This formula is a special case of the Mercator series, the Taylor series for the natural logarithm:
$$\ln(1+x)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\,x^{n},$$
which converges to the natural logarithm (shifted by 1) whenever $-1<x\le 1$.
Sum of generalized alternating harmonic series with three periodically repeated numerators
We deal with the numerical series of the form
$$\sum_{k=1}^{\infty}\left(\frac{1}{3k-2}+\frac{a}{3k-1}+\frac{b}{3k}\right), \qquad (1)$$
where $a,b$ are appropriate constants for which the series (1) converges. We shall call this series the generalized convergent harmonic series with periodically repeated numerators $1,a,b$. We determine the values of the numerators for which the series (1) converges, and the sum of this series. The power series corresponding to the series (1) evidently has the form
$$f(x)=\sum_{k=1}^{\infty}\left(\frac{x^{3k-2}}{3k-2}+\frac{a\,x^{3k-1}}{3k-1}+\frac{b\,x^{3k}}{3k}\right). \qquad (2)$$
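A sketch of the convergence argument, assuming the reconstruction of (1) above: grouping the terms into triplets,
$$\frac{1}{3k-2}+\frac{a}{3k-1}+\frac{b}{3k}=\frac{1+a+b}{3k}+O\!\left(\frac{1}{k^{2}}\right),$$
so the grouped series behaves like the harmonic series unless $1+a+b=0$, i.e. $b=-(a+1)$; the $O(1/k^{2})$ remainder then guarantees convergence of the grouped series.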
The very small differences and the high accuracy of the sums are caused by the fact that each triplet of terms reduces to a fraction with a small constant numerator independent of the summation variable $k$.
Table 1. The approximate values of the sums of the generalized harmonic series with three periodically repeating numerators for some non-negative integers $a$.
Table 2. The approximate values of the sums of the generalized harmonic series with three periodically repeating numerators for some negative integers $a$.
Table 3. The approximate values of the sums of the generalized harmonic series with three periodically repeating numerators for some $a$ in fractional form.
Computation of the 126 values above took about 74 800 seconds, i.e. almost 20 hours 47 minutes. The relative quantification accuracies of the sums are, with the exception of one value, uniformly small.
CONCLUSIONS
In this paper we dealt with the generalized convergent harmonic series with three periodically repeated numerators $1,a,b$, where $a,b\in\mathbb{R}$, i.e. with the series (1). We derived that the only value of the coefficient $b$ for which this series converges is $b=-(a+1)$, and we also derived that the sum of this series is determined by the formula
$$s(a)=\frac{9(a+1)\ln 3+\sqrt{3}\,(1-a)\,\pi}{18}. \qquad (18)$$
This formula allows one to determine other sums whose three periodically repeated numerators need not be $1,a,-(a+1)$, but can be arbitrary numerators summing to zero, at least one of them nonzero. From the derived formula (18) it follows that
$$a=\frac{18\,s-9\ln 3-\sqrt{3}\,\pi}{9\ln 3-\sqrt{3}\,\pi},$$
which allows one to calculate the value of the constant $a$ for a given sum $s$, as Table 4 below illustrates.
The derivation of (18) proceeds as follows. The power series (2) is absolutely convergent for $|x|<1$, so we can rearrange it and differentiate it term by term; for $b=-(a+1)$ we get
$$f'(x)=\sum_{k=1}^{\infty}\left(x^{3k-3}+a\,x^{3k-2}-(a+1)\,x^{3k-1}\right)=\frac{1+a\,x-(a+1)\,x^{2}}{1-x^{3}},$$
where we summed the convergent geometric series with first term 1 and ratio $x^{3}$, $|x|<1$. Converting this fraction to partial fractions by means of the CAS Maple 15, we get
$$f'(x)=\frac{(a+1)\,x+1}{x^{2}+x+1}.$$
The sum of the series (2) is obtained by integration in the form
$$f(x)=\frac{a+1}{2}\,\ln\!\left(x^{2}+x+1\right)+\frac{1-a}{\sqrt{3}}\,\arctan\frac{2x+1}{\sqrt{3}}+C,$$
and from the condition $f(0)=0$ we obtain $C=-\dfrac{(1-a)\,\pi}{6\sqrt{3}}$. Now we deal with the convergence of the series (2) at the right endpoint $x=1$. The substitution $x=1$ into the power series (2), justified by the extended version of Abel's theorem (see [5], p. 23), gives the numerical series (1). By the integral test one can prove that the series (1) converges if and only if $1+a+b=0$, i.e. $b=-(a+1)$. Evaluating $f(1)$ and simplifying, we obtain the simple formula (18) above. These generalized alternating harmonic series thus belong to the special types of convergent infinite series, such as geometric and telescoping series, whose sums can be found analytically and also presented by means of a simple numerical expression. Finally, we verified the main result by computing some sums using the CAS Maple 15 and its basic programming language, by means of the procedure:
```
sumgenhar1ab := proc(t, a)
  local r, k, s;
  s := 0; r := 0;
  for k from 1 to t do
    # k-th triplet of terms with numerators 1, a, -(a+1)
    r := 1/(3*k-2) + a/(3*k-1) - (a+1)/(3*k);
    s := s + r;
  end do;
  print("t=", k-1, "s(", a, ")=", evalf[20](s));
end proc:
```
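For an independent check, here is a minimal numerical verification in Python (not the paper's Maple code) of the closed form reconstructed in (18); the partial sums approach the closed-form values as the number of triplets grows.

```python
import math

def partial_sum(a: float, triplets: int) -> float:
    """Partial sum of sum_{k=1}^t ( 1/(3k-2) + a/(3k-1) - (a+1)/(3k) )."""
    s = 0.0
    for k in range(1, triplets + 1):
        s += 1/(3*k - 2) + a/(3*k - 1) - (a + 1)/(3*k)
    return s

def closed_form(a: float) -> float:
    """s(a) = ((a+1) ln 3)/2 + (1-a)*pi*sqrt(3)/18, the reconstructed (18)."""
    return (a + 1)/2 * math.log(3) + (1 - a) * math.pi * math.sqrt(3)/18

for a in (-2.0, 0.0, 1.0, 5.0):
    print(f"a={a:+.1f}: partial={partial_sum(a, 10**6):.8f}, "
          f"closed form={closed_form(a):.8f}")
```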
Table 4. The approximate values of the constant $a$ for some sums $s$ of the generalized harmonic series with three periodically repeating numerators $1,a,-(a+1)$.
"Mathematics"
] |
Cut-and-join structure and integrability for spin Hurwitz numbers
Spin Hurwitz numbers are related to characters of the Sergeev group, which are the expansion coefficients of the Q Schur functions, depending on odd times and on a subset of all Young diagrams. These characters involve two dual subsets: the odd partitions (OP) and the strict partitions (SP). The Q Schur functions $Q_R$ with $R\in\mathrm{SP}$ are common eigenfunctions of cut-and-join operators $W_\Delta$ with $\Delta\in\mathrm{OP}$. The eigenvalues of these operators are the generalized Sergeev characters, and their algebra is isomorphic to the algebra of Q Schur functions. Similarly to the case of the ordinary Hurwitz numbers, the generating function of spin Hurwitz numbers is a $\tau$-function of an integrable hierarchy, that is, of the BKP type. At last, we discuss relations of the Sergeev characters with matrix models.
Introduction
This month it is exactly ten years from the publication of [1,2], which introduced the commutative ring of general cut-and-join operators with linear group characters as common eigenfunctions and symmetric group characters as the corresponding eigenvalues. Since then, these operators have found a lot of applications in mathematical physics, from matrix models to knot theory, and led to the crucially important and still difficult notion of Hurwitz $\tau$-functions. A variety of further generalizations was considered, from $q,t$-deformations [3,4] to the Ooguri-Vafa partition functions [5-11] and various non-commutative extensions [12,13]. One of the most important generalizations is the construction of open Hurwitz numbers [14-16]. Note that the original construction essentially involves the characters of linear groups and symmetric groups (another manifestation of the Schur-Weyl duality), understood as embedded into the linear group $GL(\infty)$ and the symmetric group $S_\infty$. However, an obvious direction of changing this group set-up has remained poorly explored. In the present paper, we discuss this interesting subject with the hope that it will add essential new colors to the picture and give rise to many new applications. That is, instead of the Schur polynomials (characters of linear groups) we deal with the Q Schur functions, and instead of the symmetric groups we deal with the Sergeev groups. Immediate subjects to address within this context are now more or less standard; we list them in the table below, indicating where they are discussed in this paper.
Definitions
The central role in this paper will be played by the somewhat mysterious Q Schur polynomials $Q_R\{p\}$, which depend only on the odd time-variables $p_{2k+1}$ and only on strict Young diagrams $R=\{r_1>r_2>\cdots>r_{l_R}>0\}\in\mathrm{SP}$ (for ordinary diagrams some lines can have equal lengths, i.e. there is $\ge$ rather than $>$). These polynomials have two complementary origins: (a) they were introduced by Schur [17] in the study of projective representations of symmetric groups; (b) they were identified by Macdonald [18] with the Hall-Littlewood polynomials $\mathrm{HL}_R$ at $t^2=-1$, i.e. with the Macdonald polynomials $M_R$ at $q=0$, $t^2=-1$ (the tilde in $\widetilde{\mathrm{HL}}$ denotes restriction to $t^2=-1$, while the tilde over $Q$ refers to the normalization factor, which will be changed in the main part of the paper, see (16) at the end of this section). Hereafter, we replace the parameters of the Macdonald book [18] as $(q,t)\to(q^2,t^2)$. (c) Their coefficients are expressed through the characters of the Sergeev group [19-21].
The formal definition of the Q Schur polynomials can be found in Sect. 4.2.
Immediate corollaries
Definition (A) implies various determinant (actually, Pfaffian) formulas, while definition (B) implies a connection to representation theory, in particular, the ring structure
$$\mathrm{HL}_{R_1}\,\mathrm{HL}_{R_2}=\sum_{R}N^{R}_{R_1R_2}\,\mathrm{HL}_{R}.$$
A peculiar property of symmetric polynomials from the Macdonald family is that the sum on the r.h.s. is restricted from the naive $R_1+R_2\le R\le R_1\cup R_2$ in the lexicographical ordering to a narrower sum over the irreducible representations of $SL_N$ emerging in the tensor product of representations associated with the Young diagrams $R_1$ and $R_2$: $R\in R_1\otimes R_2$ (for example, $[2]\otimes[1,1]$ does not contain $[2,1]$; see [22,23] for definitions and details). Macdonald's observations were that (B2) $\mathrm{HL}_R\{p\}$ for $R\in\mathrm{SP}$ depend only on the odd time-variables $p_{2k+1}$, and (B3) $\mathrm{HL}_R\{p\}$ for $R\in\mathrm{SP}$ form a sub-ring, i.e. $N^{R}_{R_1R_2}$ vanish for $R\notin\mathrm{SP}$, provided $q=0$, $t=i$ and $R_1,R_2\in\mathrm{SP}$.
Note that $\mathrm{HL}_R\{p\}$ do not vanish for $R\notin\mathrm{SP}$, and they can then also depend on the even $p_{2k}$; thus the set of $Q_R\{p\}$ is not the same as the set of $\mathrm{HL}_R$: it is a subset, and a subring.
One more important observation is that (B4) after a peculiar rescaling of the Macdonald scalar product [18], the restricted HL polynomials for $R\in\mathrm{SP}$ acquire a very simple (unit) norm, see (3). Actually relevant for the Q Schur polynomials is the restriction to odd times, i.e. the Young diagram $\Delta$ in (3), which defines the monomial $p_\Delta=\prod_i p_{\delta_i}$, should have all lines of odd length: $\Delta\in\mathrm{OP}$. Therefore of crucial importance is the celebrated one-to-one correspondence between the sets SP and OP. For example, their generating functions coincide:
$$\prod_{n\ge1}\left(1+q^{n}\right)=\prod_{n\ge0}\frac{1}{1-q^{2n+1}}=1+q+q^{2}+2q^{3}+2q^{4}+3q^{5}+4q^{6}+5q^{7}+6q^{8}+8q^{9}+10q^{10}+\cdots \qquad (5)$$
(this is the Sylvester theorem, which is a well-known supersymmetric identity).
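A one-line verification of the equality of the two generating functions in (5), under the reconstruction above:
$$\prod_{n\ge 1}\left(1+q^{n}\right)=\prod_{n\ge 1}\frac{1-q^{2n}}{1-q^{n}}=\frac{\prod_{n\ge 1}(1-q^{2n})}{\prod_{n\ge 1}(1-q^{2n})\prod_{n\ge 0}(1-q^{2n+1})}=\prod_{n\ge 0}\frac{1}{1-q^{2n+1}},$$
so strict partitions (SP) and odd partitions (OP) are equinumerous level by level.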
Properties: comparative list
In this paper, we extend the parallelism between the Q and Schur-Macdonald calculus much further: to the modern fields of integrability and cut-and-join W-operators. Surprisingly or not, the next step, towards Virasoro-like constraints and matrix/network models, fails, at least at the naive level. This happens even if we do not insist on eigenvalue integrals with Vandermonde-like measures, but use the "softer" definition of [24-26], making the partition function Z directly out of characters. The reason for this is puzzling at the moment. Note that the Schur and Hall-Littlewood polynomials are two unrelated subsets in the Macdonald family. The Q polynomials belong to the second subset, but are the ones that look most similar to the first one. For the reader's convenience, we provide a short list of the first restricted HL polynomials in the Appendix. An additional mystery comes from the apparent relevance of the shift
$$\mathrm{Shift}:\quad r_i-i\;\longrightarrow\;r_i \qquad (6)$$
in many formulas for Schur polynomials: it helps to convert them into formulas for Q. However, it is not just this substitution; some other things should also be adjusted, and there is no universal conversion rule. In fact, the shift sort of converts the ordinary Young diagrams into the strict ones, but again not quite: the image is not always a Young diagram. Still, when it is, the shifted diagram belongs to SP.
The difficulties with a matrix model formulation seem related to the old problem of finding a matrix model with only odd time-variables. Originally it was related to the matrix model solutions of the KdV (rather than KP) hierarchy, and a possible solution was provided by the Kontsevich model, at the price of making an a priori non-obvious "Fourier/Miwa transform" from time-variables to "the external field". We are still lacking a clear understanding of this procedure, which remains a piece of art, and the problems with the Q Schur polynomials seem to be a manifestation of this lacuna in our knowledge. There are numerous claims that the BKP hierarchy, at variance with the KdV one, is easier to describe by matrix models, but we did not manage to find a Q-based matrix model on this way.
Hamiltonians
As the Macdonald polynomials, the $\mathrm{HL}_R$ are eigenfunctions of a Calogero-Ruijsenaars-like Hamiltonian (here $S_R\{p_k\}$ denotes the Schur polynomial, a symmetric function of the variables $x_i$, regarded as a function of the power sums), which, for $q=0$ and $t^2=-1$, reduces to an operator $\hat H$ in which the first Schur polynomial depends on odd times only, while the second one involves derivatives w.r.t. all times. All eigenvalues trivialize in this limit. The Hamiltonian (7) is actually a difference operator, since it involves shifts of the $p$-variables, but the Macdonald polynomials are also eigenfunctions of differential W-operators, which, however, look more involved [28]. The operators (8) act nicely on $Q_R$, which depend only on odd times. However, there is a conspiracy allowing them to act properly also on the other $\mathrm{HL}_R$, with $R\notin\mathrm{SP}$. In fact, the Hamiltonian (7) becomes the Ruijsenaars Hamiltonian in terms of the Miwa variables, $p_k=\sum_{i}^{n}x_i^k$. Moreover, one can write down a set of $n$ integrable Ruijsenaars Hamiltonians in these variables as difference operators (10) acting on functions of the $n$ variables $x_i$. In particular, the first of them reproduces the eigenvalue $\lambda_R$ given in (7). Note that the Hamiltonians (10) still depend on the parameter $q$ even at the point $t=q$, while the eigenfunctions, which are the Schur polynomials, do not. This allows one to bring $q$ to zero, obtaining from the difference Hamiltonians the differential ones, which are nothing but the Calogero Hamiltonians. In the Hall-Littlewood case $q=0$, the Hamiltonians reduce to difference operators in which the corresponding $x_i$ on the r.h.s. are just put to zero. The generating function of the eigenvalues can then be written in a closed form, and one obtains its specialization upon putting $t^2=-1$. Similarly, in order to obtain the Jack polynomials from the Macdonald ones, one can bring both $t$ and $q$ to zero together, keeping $\beta:=\log t/\log q$ finite. In this case, one still obtains the Calogero Hamiltonians with $\beta$ being the coupling constant.
where
$$\xi_R:=\left[\frac{n-l_R}{2}\right] \qquad (15)$$
and $[\ldots]$ denotes the integer part. Since these eigenvalues depend only on the number of lines in the Young diagram $R$, they essentially differ from those of the cut-and-join operators of Sect. 5.
Application to Hurwitz numbers
This will be the main topic of the text below, and the final summary will be given as a comparative table in Sect. 9.
Here we enumerate the main technical statements, which are discussed in the middle part of the text.
1. Interplay between the skew symmetric functions and finite group characters.
2. An equivalence of the two definitions of the Hurwitz numbers: through the enumeration of ramified coverings ("a geometric definition") and through the Frobenius formula via the symmetric or Sergeev group characters and Schur functions ("an algebraic definition").
3. An expression for the skew counterpart of $d_R$ in (30) through the (shifted) symmetric functions.
4. A relation of integrability to the theory of symmetric functions.
5. The theory of cut-and-join operators $W$.
In Sect. 3 we review all these issues for the ordinary Hurwitz numbers, and the remaining sections describe their direct counterparts in the spin Hurwitz case.
Notation
Below in the text, we use the normalization (16) such that the polynomials $Q_R$ have unit norm w.r.t. the scalar product (3).
Geometric set-up
The Hurwitz number [29-32] is a weighted number of globally topologically different branched coverings with the same topological behavior in neighborhoods of the critical values. We will consider only coverings of the sphere $S^2$. A branched covering is given by a continuous map $\varphi:P\to S^2$, where $P$ is a (not necessarily connected) compact surface. There exists only a finite number $|\mathrm{Aut}(\varphi)|$ of homeomorphisms $f:P\to P$ such that $\varphi f=\varphi$. To almost every point $s\in S^2$, exactly $d$ points of $P$ are mapped. The number $d$ is called the degree of $\varphi$. The remaining points are called critical values; there are only finitely many of them. Let $x_1,\ldots,x_l$ be all the points of $P$ that map to a critical value $s\in S^2$. A single loop around $x_i$ is mapped by $\varphi$ to a loop around $s$ traversed $\delta_i$ times, and $\delta_1+\cdots+\delta_l=d$. The ordered integers $\delta_i$ represent a partition of $d$, which gives rise to the Young diagram $\Delta_s=[\delta_1,\ldots,\delta_l]$ of degree $d$. The covering is said to be of topological type $\Delta_s$ at $s$.
Consider now the set $V(\Delta_1,\ldots,\Delta_k)$ of all branched coverings with critical values $s_1,\ldots,s_k\in S^2$ of topological types $\Delta_1,\ldots,\Delta_k$. We call the coverings $\varphi_1:P_1\to S^2$ and $\varphi_2:P_2\to S^2$ essentially different if there is no homeomorphism $F:P_1\to P_2$ with $\varphi_2 F=\varphi_1$. The Hurwitz number is then
$$\mathrm{Hur}(\Delta_1,\ldots,\Delta_k)=\sum_{\varphi}\frac{1}{|\mathrm{Aut}(\varphi)|},$$
the sum being taken over a maximal set of essentially different coverings from $V(\Delta_1,\ldots,\Delta_k)$. It is possible to prove that this number depends only on the Young diagrams $\Delta_1,\ldots,\Delta_k$. The classical Frobenius formula gives a combinatorial expression for the Hurwitz numbers [29-32]:
$$\mathrm{Hur}(\Delta_1,\ldots,\Delta_k)=\sum_{R}\left(\frac{\psi_R(1)}{d!}\right)^{2}\prod_{i=1}^{k}\frac{[\Delta_i]\,\psi_R(\Delta_i)}{\psi_R(1)}, \qquad (18)$$
where $[\Delta]$ is the number of permutations of cyclic type $\Delta$, i.e. the number of elements in the conjugacy class of the symmetric group $S_d$ given by the Young diagram $\Delta$, $|\Delta|=d$; $\psi_R(\Delta)$ is the value of the character $\psi_R$ of the representation $R$ of the symmetric group $S_d$ on a permutation of cyclic type $\Delta$; $\psi_R(1)$ is its value on the permutation with all unit cycles; and the sum is taken over all characters of irreducible representations of $S_d$. A definition of more general Hurwitz numbers can be found in [33,34].
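As a sanity check of the reconstructed formula (18), here is a worked example (an illustration, not from the original text) for the smallest non-trivial case: degree $d=2$ with two critical values of type $\Delta_1=\Delta_2=[2]$. The group $S_2$ has two irreducible representations, trivial and sign, both with $\psi_R(1)=1$ and $\psi_R([2])=\pm1$, while $[\,[2]\,]=1$. Hence
$$\mathrm{Hur}\big([2],[2]\big)=\sum_{R}\left(\frac{\psi_R(1)}{2!}\right)^{2}\prod_{i=1}^{2}\frac{[\,[2]\,]\,\psi_R([2])}{\psi_R(1)}=\frac{1}{4}\cdot 1+\frac{1}{4}\cdot(-1)^{2}=\frac{1}{2},$$
matching the single double covering of the sphere branched over two points, whose automorphism group has order 2.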
Schur functions and their properties
The main tools for dealing with the Hurwitz numbers and their generating functions are the symmetric functions, that is, the Schur polynomials, and the characters of symmetric groups [18,35].
The Schur polynomials are constructed in the following way. First of all, let us define a set of functions $P_n$ by the generating function
$$\sum_{n}P_n z^{n}:=e^{\sum_k \frac{p_k}{k}z^{k}}. \qquad (19)$$
Now we define the Schur symmetric function for any Young diagram $R$ with $l_R$ lines:
$$S_R\{p\}=\det_{1\le i,j\le l_R}P_{r_i-i+j}. \qquad (20)$$
The Schur functions are orthonormal,
$$\big\langle S_R\big|S_{R'}\big\rangle=\delta_{RR'}, \qquad (21)$$
with the scalar product
$$\big\langle p_\Delta\big|p_{\Delta'}\big\rangle=z_\Delta\,\delta_{\Delta\Delta'}. \qquad (22)$$
The Schur functions also satisfy the Cauchy formula
$$\sum_R S_R\{p\}\,S_R\{p'\}=\exp\left(\sum_k\frac{p_k\,p'_k}{k}\right). \qquad (23)$$
$S_R\{p\}$ form a full basis in the space of polynomials of $p_k$ and thus form a closed ring. Let us introduce the Littlewood-Richardson coefficients $N^{R_3}_{R_1R_2}$:
$$S_{R_1}\{p\}\,S_{R_2}\{p\}=\sum_{R_3}N^{R_3}_{R_1R_2}\,S_{R_3}\{p\}. \qquad (24)$$
Then, the skew Schur functions $S_{R/P}$, defined as
$$S_R\{p+p'\}=\sum_{P}S_{R/P}\{p\}\,S_P\{p'\}, \qquad (25)$$
are given by
$$S_{R/P}\{p\}=\sum_{R_2}N^{R}_{P R_2}\,S_{R_2}\{p\}. \qquad (26)$$
The following formulas involving the skew functions are also correct:
• the Cauchy formula
$$\sum_R S_{R/P_1}\{p\}\,S_{R/P_2}\{p'\}=\exp\left(\sum_k\frac{p_k\,p'_k}{k}\right)\sum_R S_{P_2/R}\{p\}\,S_{P_1/R}\{p'\}, \qquad (27)$$
• the expansion formula
$$S_{R/P}\{p+p'\}=\sum_{Q}S_{R/Q}\{p\}\,S_{Q/P}\{p'\}. \qquad (28)$$
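The determinant formula (20), as reconstructed above, is easy to test symbolically. The following minimal sketch builds the $P_n$ from the generating function (19) and assembles $S_{[2,1]}$; the expected output $p_1^3/3-p_3/3$ is the standard power-sum expansion of $s_{21}$.

```python
import sympy as sp

z = sp.symbols('z')
p = sp.symbols('p1:7')  # p1..p6

N = 6
gen = sp.exp(sum(p[k-1]*z**k/k for k in range(1, N+1)))
series = sp.series(gen, z, 0, N).removeO()
P = [sp.expand(series.coeff(z, n)) for n in range(N)]  # P_0..P_5

def schur(R):
    """Jacobi-Trudi determinant S_R = det P_{r_i - i + j}."""
    l = len(R)
    def entry(i, j):
        n = R[i] - (i+1) + (j+1)
        return P[n] if n >= 0 else sp.Integer(0)
    return sp.expand(sp.Matrix(l, l, entry).det())

print(schur([2, 1]))  # expected: p1**3/3 - p3/3
```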
Frobenius formula
Now we can discuss a combinatorial formula for the Hurwitz numbers [29-32]. First of all, we need the character $\psi_R(\Delta)$ of the symmetric group in the representation $R$, whose value on an element of the conjugacy class $\Delta$ appears as a coefficient in the expansion of the Schur functions [35]:
$$S_R\{p\}=\sum_{\Delta:\ |\Delta|=|R|}\frac{\psi_R(\Delta)}{z_\Delta}\,p_\Delta, \qquad (29)$$
where we denote $n=|R|$ and $p_\Delta:=\prod_i p_{\delta_i}=\prod_k p_k^{m_k}$, i.e. $m_k$ is the number of lines of length $k$. The number of elements in the conjugacy class of $\Delta$ is $|\Delta|!/z_\Delta$, where $z_\Delta:=\prod_k k^{m_k}\,m_k!$ is the standard symmetric factor of the Young diagram (the order of the automorphism) [35], while the dimension of the representation $R$ of the symmetric group is controlled by
$$d_R:=\frac{\psi_R(1)}{|R|!}=S_R\{p_k=\delta_{k,1}\}, \qquad (30)$$
computable by the hook formula. As any characters, the $\psi_R(\Delta)$ satisfy the orthogonality conditions
$$\sum_{\Delta}\frac{\psi_R(\Delta)\,\psi_{R'}(\Delta)}{z_\Delta}=\delta_{RR'},\qquad \sum_{R}\frac{\psi_R(\Delta)\,\psi_{R}(\Delta')}{z_\Delta}=\delta_{\Delta\Delta'}. \qquad (32)$$
Note that the Littlewood-Richardson coefficients are expressed through the characters $\psi_R(\Delta)$ as
$$N^{R}_{R_1R_2}=\sum_{\Delta_1,\Delta_2}\frac{\psi_{R_1}(\Delta_1)}{z_{\Delta_1}}\,\frac{\psi_{R_2}(\Delta_2)}{z_{\Delta_2}}\,\psi_{R}(\Delta_1+\Delta_2), \qquad (33)$$
where $\Delta_1+\Delta_2$ denotes the reordered union of all lines of the two diagrams. Now the Hurwitz numbers are given as follows ($g$ is the genus of the base, $|\Delta_i|=d$) [29-32]:
$$\mathrm{Hur}_g(\Delta_1,\ldots,\Delta_k)=\sum_{R}d_R^{2-2g}\prod_{i=1}^{k}\varphi_{R,\Delta_i}, \qquad (34)$$
where, following Frobenius, we introduce $\varphi_{R,\Delta}:=\psi_R(\Delta)/(z_\Delta\,d_R)$. This formula at $g=0$ agrees with (18), since $d_R=\psi_R(1)/d!$ and $[\Delta]=d!/z_\Delta$.
3.3 Cut-and-join (W-) operators and Young diagram algebra
One can naturally associate with Hurwitz numbers a set of commuting differential operators. These operators generalize the cut-and-join operator of [38], which is the simplest one in the whole set, and are constructed in the following way [1,2,39]. They are originally invariant differential operators on the matrices $M$ from $GL(\infty)$, so that the time-variables are $p_k=\mathrm{Tr}\,M^k$, and the eigenvalues of the matrices are related to the $p_k$ by the Miwa transformation. Then, the generalized cut-and-join operators are
$$\hat W_\Delta:=\frac{1}{z_\Delta}\,:\prod_i \hat D_{\delta_i}:\ ,\qquad \hat D_k:=\mathrm{Tr}\,(M\partial_M)^k. \qquad (36)$$
The normal ordering in (36) implies that all the derivatives $\partial_M$ stand to the right of all $M$. Since the $\hat W_\Delta$ are invariant matrix operators, and we apply them only to invariants, they can be realized as differential operators in $p_k$ [1,2]. In particular, the simplest cut-and-join operator $\hat W_{[2]}$ of [38] is
$$\hat W_{[2]}=\frac{1}{2}\sum_{a,b\ge1}\left(ab\,p_{a+b}\,\frac{\partial^2}{\partial p_a\,\partial p_b}+(a+b)\,p_a p_b\,\frac{\partial}{\partial p_{a+b}}\right); \qquad (37)$$
another example is the third-order operator $\hat W_{[3]}$ of a similar structure (38). An essential property of these generalized cut-and-join operators is that they form a commutative family with the common eigenfunctions being the Schur functions:
$$\hat W_\Delta\,S_R=\varphi_{R,\Delta}\,S_R. \qquad (40)$$
What is important is that one can lift in this formula the restriction $|\Delta|=|R|$: for a diagram $\Delta$ containing $r$ unit cycles, the eigenvalue for $|\Delta|\le|R|$ is obtained from $\psi_R([\Delta,1^{|R|-|\Delta|}])$ with the appropriate combinatorial normalization factor (42). Note that the commutative family of the generalized cut-and-join operators gives rise to the associative algebra of Young diagrams
$$\hat W_{\Delta_1}\,\hat W_{\Delta_2}=\sum_{\Delta}C^{\Delta}_{\Delta_1\Delta_2}\,\hat W_{\Delta}. \qquad (43)$$
Note that this algebra was first constructed in [40] just in terms of $\varphi_{R,\Delta}$. Indeed, using (40), one can immediately translate (43) into terms of the vectors $\varphi_{R,\Delta}$ in the space of representations $R$ of $S_\infty$:
$$\varphi_{R,\Delta_1}\,\varphi_{R,\Delta_2}=\sum_{\Delta}C^{\Delta}_{\Delta_1\Delta_2}\,\varphi_{R,\Delta}. \qquad (44)$$
Still, the fact that the structure constants $C^{\Delta}_{\Delta_1\Delta_2}$ are independent of $R$ follows in the simplest way from the algebra of commuting cut-and-join operators.
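A minimal symbolic check (under the reconstruction of (37) and (40) above) that $\hat W_{[2]}$ acts diagonally on the first Schur functions, with eigenvalues $\frac12\sum_i r_i(r_i-2i+1)$:

```python
import sympy as sp

N = 4
p = {k: sp.Symbol(f'p{k}') for k in range(1, N+1)}

def W2(f):
    """Cut-and-join operator W_[2], truncated to p_1..p_N."""
    out = sp.Integer(0)
    for a in range(1, N+1):
        for b in range(1, N+1):
            if a + b <= N:
                out += a*b*p[a+b]*sp.diff(f, p[a], p[b])   # cut term
                out += (a+b)*p[a]*p[b]*sp.diff(f, p[a+b])  # join term
    return sp.expand(out/2)

S2  = (p[1]**2 + p[2])/2   # S_[2]
S11 = (p[1]**2 - p[2])/2   # S_[1,1]
print(sp.simplify(W2(S2)/S2))    # eigenvalue  1 = (1/2)*2*(2-1)
print(sp.simplify(W2(S11)/S11))  # eigenvalue -1 = (1/2)*(1*0 + 1*(-2))
```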
$\varphi_{R,\Delta}$ and shifted Schur functions
In accordance with formula (42), $\varphi_{R,\Delta}$ is expressed through $\psi_R([\Delta,1^{|R|-|\Delta|}])$. In its turn, the latter can be expressed [40] through the shifted Schur functions [41]. Indeed, an explicit formula (45) for $\psi_R([\Delta,1^{|R|-|\Delta|}])$ involves the skew Schur functions at the special point $p_k=\delta_{1,k}$. This formula follows from the manifest expression (26) for the skew Schur functions through the Littlewood-Richardson coefficients and the manifest expression (33) for the latter. Then, using the expansion (29) for the Schur function and repeatedly applying the orthogonality relation (32) of the symmetric group characters, one immediately obtains (45).
The quantity $S_{R/\mu}\{\delta_{1,k}\}$ can be expressed through the shifted Schur functions $S^*_\mu(R)$ [41]. The shifted Schur functions are symmetric functions of the $n$ variables $x_i-i$ and can be defined either through a sum over the reverse semi-standard Young tableaux $T$, whose entries strictly decrease down the columns and non-strictly decrease along the rows, or through the determinant
$$S^*_\mu(x_1,\ldots,x_n)=\frac{\det_{i,j}\big[(x_i+n-i;\ \mu_j+n-j)\big]}{\det_{i,j}\big[(x_i+n-i;\ n-j)\big]}, \qquad (47)$$
where $(x;n):=\prod_{k=0}^{n-1}(x-k)=x!/(x-n)!$. In the limit of large $x_i$, $(x;n)\to x^n$, and formula (47) reduces to the formula for the standard Schur polynomials. Hence, the standard Schur polynomials are the large-$x_i$ asymptotics of the shifted ones. Equivalently, the shifted Schur functions can also be unambiguously expressed through the shifted power sums. Now one can use [41, formula (0.14)] in order to express $S_{R/\mu}\{\delta_{1,k}\}$ through $S^*_\mu(R)$, see (51) and (52). One can definitely consider more than two sets of Young diagrams $\Delta_1,\Delta_2$ and accordingly more time-variables; however, the standard integrability will not persist in those cases. Now, one can further define $Z_g(\beta;p,\bar p):=\sum_n q^n Z_{g,n}(\beta;p,\bar p)$, and we will restrict ourselves to genus zero only. At last, we use the continuation (42) of $\varphi_{R,\Delta}$ to $|R|\ne|\Delta|$ and consider more than one $\Delta$ in order to obtain finally (see details in [1,2])
$$Z(p,\bar p)=\sum_R S_R\{p\}\,S_R\{\bar p\}\,\exp\left(\sum_i\beta_i\,\varphi_{R,\Delta_i}\right), \qquad (56)$$
where we have fixed a set of $\{\Delta_i\}$ and rescaled $qp_k\to p_k$. Now one may ask when the generating function (56) is a $\tau$-function of the KP hierarchy (or, more generally, of the Toda hierarchy) w.r.t. each set of time-variables $p_k$ and $\bar p_k$. First of all, the Schur function satisfies the KP equation and, in fact, the entire KP hierarchy. This is nearly obvious from the fermionic realization of characters [42-45], but in the ordinary formulation looks like a set of non-trivial identities. Linear combinations of Schur functions solve the hierarchy when their coefficients satisfy the Plücker relations, and the generating function of the whole hierarchy is written in terms of the generating parameters $y_k$ through the polynomials $P_k$ of (19). It was first proved in [47] that the partition function (56) solves the KP hierarchy w.r.t. each set of time-variables $p_k$ and $\bar p_k$ if the sum in the exponential, $\sum_i\beta_i\varphi_{R,\Delta_i}$, is an arbitrary linear combination of the Casimir operators $C_k(R)$. A particular case of this claim [48] is the case of only one $\Delta=[2]$, since $\varphi_{R,[2]}$ is associated with $C_2(R)$:
$$\varphi_{R,[2]}=\frac{1}{2}\sum_i r_i\,(r_i-2i+1). \qquad (63)$$
The $\tau$-functions of this kind are called hypergeometric [49,50]. However, the higher $\varphi_{R,\Delta}$ are not linear combinations of the $C_k(R)$. The proper combinations of $C_k(R)$ are nicknamed the completed cycles. Hence, the final claim is [47]: only the generating function (56) with the completed cycles gives a $\tau$-function of the KP hierarchy. More details and discussion of other cases can be found in [12,51].
Matrix models and character expansions
One can rewrite the generating function (56), in the cases when it is a $\tau$-function, in the form [12,49,50]
$$Z(p,\bar p)=\sum_R S_R\{p\}\,S_R\{\bar p\}\,w_R, \qquad (64)$$
with the function $w_R$ being the product
$$w_R=\prod_{(i,j)\in R}f(i-j), \qquad (65)$$
since the exponential of any linear combination of the $C_k(R)$ can be presented [12, sect. 3] as $w_R$ with some function $f(x)$. For instance, $e^{\beta C_2(R)}=\prod_{(i,j)\in R}e^{\beta(i-j)}$. It turns out that the generating functions (64) are sometimes partition functions of matrix models. For instance, the partition function of the rectangular $N_1\times N_2$ complex matrix model is [13,24,25]
$$Z_{N_1\times N_2}\{p\}=\sum_R\frac{D_R(N_1)\,D_R(N_2)}{d_R}\,S_R\{p\}, \qquad (66)$$
where
$$D_R(N):=S_R\{p_k=N\} \qquad (67)$$
is the dimension of the representation of the $SL_N$ group given by the Young diagram $R$. Similarly, the partition function of the Gaussian Hermitean matrix model is [13,24,25]
$$Z_{N}\{p\}=\sum_R\frac{D_R(N)\,S_R\{p_k=\delta_{k,2}\}}{d_R}\,S_R\{p\}. \qquad (68)$$
Both these partition functions are known to be $\tau$-functions of the KP hierarchy (and of the Toda chain hierarchy) [13,52-55], which is evident from the results of the previous section: both partition functions can be presented in the form (64) with $\bar p_k=N_2$ in (66) and $\bar p_k=\delta_{2,k}$ in (68), and with the weight function $w_R$ of the form
$$w_R=\prod_{(i,j)\in R}\big(N_1+j-i\big). \qquad (69)$$
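A short sketch of why (66) and (68) indeed fit the hypergeometric form (64)-(65), assuming the standard hook-content formulas (which are standard facts, though not written out in the text above): with $h_{ij}$ the hook lengths of $R$,
$$D_R(N)=\prod_{(i,j)\in R}\frac{N+j-i}{h_{ij}},\qquad d_R=\prod_{(i,j)\in R}\frac{1}{h_{ij}}\qquad\Longrightarrow\qquad w_R=\frac{D_R(N_1)}{d_R}=\prod_{(i,j)\in R}\big(N_1+j-i\big),$$
i.e. the weight function is the product of $f(x)=N_1+x$ over the contents $x=j-i$ of the boxes of $R$, exactly as required by (65) and (69).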
Geometric set-up
Spin Hurwitz numbers are similar to the classical Hurwitz numbers, adapted to coverings with spin structures [36,37]. The spin bundle was defined (under the name of theta-characteristic) by B. Riemann as a bundle over a Riemann surface such that its tensor square is the cotangent bundle [56,57]. The spin bundle on a surface $P$ has an equivalent topological description using a quadratic form (Arf-function) $\omega:H_1(P,\mathbb{Z}_2)\to\mathbb{Z}_2$ [58-60]. In this case, the definition of the spin Hurwitz number is
$$\mathrm{Hur}^{(p)}(\Delta_1,\ldots,\Delta_k)=\sum_{\varphi}\frac{(-1)^{\mathrm{Arf}(\omega_\varphi)}}{|\mathrm{Aut}(\varphi)|}, \qquad (70)$$
where the sum is taken over a maximal set of essentially different coverings from $V(\Delta_1,\ldots,\Delta_k)$, and $\omega_\varphi$ is the Arf-function induced on the covering surface. It is possible to prove that this number depends only on the Young diagrams $\Delta_1,\ldots,\Delta_k$; a combinatorial expression for it follows from (70) [37,61,62].
Schur Q-functions and their properties
A counterpart of the Schur functions which allows one to construct a combinatorial formula for the spin Hurwitz numbers, similar to the Frobenius formula (34), is the system of symmetric Schur Q-functions [17,18]. These functions were originally introduced by Schur for the projective representations of the symmetric groups, and they turn out to induce characters of the Sergeev group [19-21,63]. They can be obtained from the Hall-Littlewood polynomials $\mathrm{HL}_R(t)$ [18] at $t^2=-1$, for $R\in\mathrm{SP}$. Hereafter, SP (strict partitions) denotes the set of Young diagrams with all line lengths distinct. There is also a manifest way to construct them. To this end, let us define a set of functions $Q_{n,m}$ by the generating function
$$\sum_{n,m}Q_{n,m}\,z_1^{n}z_2^{m}:=\frac{z_1-z_2}{z_1+z_2}\;e^{\,2\sum_k\frac{p_{2k+1}}{2k+1}\left(z_1^{2k+1}+z_2^{2k+1}\right)}. \qquad (73)$$
It is a power series in both $z_1$ and $z_2$, since the exponential in (73) is equal to 1 at $z_2=-z_1$. Moreover, $Q_{n,m}=-Q_{m,n}$, i.e. the matrix $Q_{ij}:=Q_{R_i,R_j}$ associated with a Young diagram $R$ is antisymmetric. The indices of the matrix run from 1 to $l_R$ for even $l_R$ and from 1 to $l_R+1$ for odd $l_R$, i.e. we add a line of zero length to a Young diagram with an odd number of lines, $Q_{0,n}$ being non-zero. Now we define the Q Schur symmetric function via the Pfaffian of $Q$:
$$Q_R\{p\}\ \propto\ \mathrm{Pf}\big(Q_{ij}\big). \qquad (74)$$
With this normalization, the Q-functions are orthonormal,
$$\big\langle Q_R\big|Q_{R'}\big\rangle=\delta_{RR'}, \qquad (75)$$
and the Cauchy formula acquires the form
$$\sum_{R\in\mathrm{SP}}Q_R\{p\}\,Q_R\{p'\}=\exp\left(2\sum_k\frac{p_{2k+1}\,p'_{2k+1}}{2k+1}\right). \qquad (78)$$
$Q_R\{p\}$ form a full basis in the space of polynomials of $p_{2k+1}$ and thus form a closed ring. Since the Cauchy formula holds and the norms of $Q$ are unities, the skew functions $Q_{R/R'}$ are defined directly through the structure constants of the ring [22,23,64]. Namely, introduce the Littlewood-Richardson coefficients $N^{R_3}_{R_1R_2}$ in the standard way:
$$Q_{R_1}\{p\}\,Q_{R_2}\{p\}=\sum_{R_3}N^{R_3}_{R_1R_2}\,Q_{R_3}\{p\}. \qquad (79)$$
Then, the skew Q Schur functions $Q_{R/P}$, defined as
$$Q_R\{p+p'\}=\sum_{P}Q_{R/P}\{p\}\,Q_P\{p'\}, \qquad (80)$$
are given by
$$Q_{R/P}\{p\}=\sum_{R_2}N^{R}_{P R_2}\,Q_{R_2}\{p\}. \qquad (81)$$
The usual formulas involving the skew functions are also correct: the Cauchy formula and the expansion formula, analogous to (27) and (28).
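The generating function (73) can be expanded mechanically. The sketch below uses the classical two-row formula $Q_{(r,s)}=q_rq_s+2\sum_i(-1)^iq_{r+i}q_{s-i}$ in Macdonald's normalization (an assumption: this differs from the unit-norm convention of the present paper by powers of 2), where the $q_n$ are the one-row coefficients of $e^{2\sum_k p_{2k+1}z^{2k+1}/(2k+1)}$.

```python
import sympy as sp

z = sp.symbols('z')
N = 6
p = {k: sp.Symbol(f'p{k}') for k in range(1, N+1, 2)}  # odd times only

gen = sp.exp(2*sum(p[k]*z**k/k for k in range(1, N+1, 2)))
ser = sp.series(gen, z, 0, N).removeO()
q = [sp.expand(ser.coeff(z, n)) for n in range(N)]  # q_0..q_5

def Q2(r, s):
    """Two-row Q_{(r,s)} = q_r q_s + 2*sum_i (-1)^i q_{r+i} q_{s-i}."""
    out = q[r]*q[s]
    for i in range(1, s+1):
        out += 2*(-1)**i * q[r+i]*q[s-i]
    return sp.expand(out)

print(Q2(2, 1))  # 4*p1**3/3 - 4*p3/3: depends only on odd times
```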
Frobenius formula
Now we are ready to discuss a combinatorial formula for the spin Hurwitz numbers [37,61,62]. First of all, we associate the characters $\Psi_R(\Delta)$ of the Sergeev group with the expansion coefficients
$$Q_R\{p\}=\sum_{\Delta\in\mathrm{OP}}\frac{\Psi_R(\Delta)}{z_\Delta}\,p_\Delta, \qquad (84)$$
where OP (odd partitions) is the set of Young diagrams with all line lengths odd. (Both SP and OP have the same dimensions, as can be seen from their generating functions: the generating function of the number of SP at a given level is $\prod_n(1+q^n)$, while that of OP is $\prod_n(1-q^{2n+1})^{-1}$, and these two products are equal to each other.) These coefficients play the same role for the spin Hurwitz numbers as the characters of symmetric groups do for the ordinary Hurwitz numbers. As any characters, they satisfy orthogonality conditions analogous to (32). Note that the Littlewood-Richardson coefficients for the Q-functions are expressed through the characters $\Psi_R(\Delta)$ in the usual way, cf. (33), where $\Delta_1+\Delta_2$ denotes the reordered union of all lines of the two diagrams. We will also need the quantity
$$D_R:=Q_R\{p_k=\delta_{k,1}\}, \qquad (89)$$
which is a counterpart of the standard $d_R$ and regulates the dimension of the representation $R$ of the Sergeev group. It is given manifestly by a counterpart of the hook formula (30), and it is non-zero only for $R\in\mathrm{SP}$. Now the spin Hurwitz numbers for the genus-$g$ base with a spin structure $\omega$ such that $\mathrm{Arf}(\omega)=p$ are given by a formula analogous to (34), with $\Delta\in\mathrm{OP}$, with $d_R$ replaced by $D_R$, and with $\varphi_{R,\Delta}$ replaced by $\Phi_{R,\Delta}:=\Psi_R(\Delta)/(z_\Delta\,D_R)$ [37,61,62]; here $l_R$ is the number of lines in the Young diagram $R$. As compared with (18), this formula contains an additional sign factor depending on $l_R$ and additionally depends on the parity $p\in\mathbb{Z}/2\mathbb{Z}$ [56-60]. The surface $S^2$ admits only the even spin structure and, therefore, $\mathrm{Hur}^{(1)}_{0,n}$ does not exist. However, all formulas can be smoothly extended also to this case [61,62], and the formula at $g=0$ then agrees with (18) in the same manner as before.
Cut-and-join (W-) operators and Young diagram algebra
W-operators are again defined as graded differential operators in the time variables $p_k$ whose common eigenfunctions are the Q Schur functions. They are labeled by $\Delta\in\mathrm{OP}$, so that the order of the operator is $|\Delta|$. One can immediately check the explicit form of the first operators (94); the first non-trivial one appears at the third level, $\hat W_{(3)}$ (95). They are Hermitian with respect to the scalar product introduced above. The eigenvalues of $\hat W_{[1^k]}$ on the eigenfunction $Q_R$ are nothing but the lifting of $\Psi_R(\Delta)$ to $|R|=|\Delta|$, similar to (42), for the diagram $\Delta$ containing $r$ unit cycles. At the same time, the eigenvalue of $\hat W_{(3)}$ is not equal to $\Psi_R([3,1^{|R|-3}])$ (93), as it was in the ordinary Hurwitz case (42). This means that, in order to construct $\hat W_{[3]}$, one has to add $\hat W_{[1,1]}$ to $\hat W_{(3)}$ so that the eigenvalue of $\hat W_{[3]}$ becomes exactly $\Psi_R([3,1^{|R|-3}])$. Such a "corrected" cut-and-join operator generates the spin Hurwitz numbers; the operator $\hat W_{(3)}$, (95), instead generates the BKP τ-function (see the next section), i.e. it provides a counterpart of the completed cycle. At the first five levels, one can obtain the values of $\Psi_R(\Delta)$ explicitly, and, linearly combining $\Psi_R(\Delta)$ with different $\Delta$'s, one can easily cook up expressions which are counterparts of the completed cycles in the ordinary, non-spin case.
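As a minimal illustration of the statement that the $Q_R$ are common eigenfunctions of graded differential operators in the $p_k$, one can check the simplest such operator, the grading (Euler) operator, on a low-level example; the explicit $Q_{(2,1)}$ below is taken in the un-normalized convention of the earlier sketch (any normalization works, since the operator is linear):

```python
import sympy as sp

p1, p3, p5 = sp.symbols('p1 p3 p5')
# Q_{(2,1)} in the un-normalized convention, from the Pfaffian construction:
Q21 = sp.Rational(4, 3)*(p1**3 - p3)

# The grading operator E = sum_{k odd} k p_k d/dp_k is the simplest example
# of an operator diagonal on the Q basis: E Q_R = |R| Q_R.
E = lambda f: sp.expand(1*p1*sp.diff(f, p1)
                        + 3*p3*sp.diff(f, p3)
                        + 5*p5*sp.diff(f, p5))
assert sp.simplify(E(Q21) - 3*Q21) == 0   # |R| = 3 for R = (2,1)
```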
Ψ_{R,Δ} and shifted symmetric functions
Similarly to (45), one can prove a formula expressing the continued characters $\Psi_{R,\Delta}$ through the skew functions, where the sum runs over the strict partitions. Now one could try to express $Q_{R/\mu}\{\delta_{k,1}\}$ through the shifted symmetric functions. Let us start with the shifted Macdonald functions [65], which are symmetric functions of the $n$ variables $x_it^{-2i}$ and can again be defined through a sum over the reverse semi-standard Young tableaux $T$, where $\xi_T(q,t)$ are the same coefficients (rational functions of $q$ and $t$) as in the usual (non-shifted) Macdonald polynomials.
Equivalently, they can also be unambiguously expressed through the shifted power sums. Now one would have to put $q=0$ and $t^2=-1$ in the shifted Macdonald polynomials and consider only the strict partitions in order to obtain the Schur Q-functions. However, one immediately realizes that the requirement (107) becomes too singular when one puts $q=0$ and $t^2=-1$, and, besides, the Schur Q-functions would become symmetric functions of the variables $(-1)^ix_i$. Instead, we consider the usual symmetric functions of the variables $x_i$, or functions of the variables $p_k=\sum_i x_i^k$, and define the corresponding shifted counterparts. Then, as a counterpart of (51), we obtain a formula which can be immediately recast into an explicit expression for $\Psi_{R,\Delta}$, reproducing (102) in particular examples.
These bilinear relations have an evident solution with an arbitrary function $F(x)$. This describes a counterpart of the hypergeometric τ-functions, i.e. τ-functions of the form (116) that satisfy the hierarchy equations with respect to both sets of time variables, in the BKP case. Numerous discussions of the BKP hierarchy and related issues can be found in [70][71][72][73][74]. Note that, similarly to (42), one can continue the Sergeev characters to $|R|=|\Delta|$. However, at variance with (63), the lowest non-trivial Sergeev character $\Psi_R([3])$, (93), is not of the form (121) because of the second mixing term and, hence, does not give rise to a τ-function. At the same time, the eigenvalue of the first non-trivial cut-and-join operator (99), $\tilde\Psi_R([3])$, is a linear combination of the proper form and can be used in (112) in order to obtain a τ-function.
Also note that the formulas for $\Psi_{R,\Delta}$, and many similar ones, in the spin case involve the quantities $R_i$, while the same formulas in the non-spin case involve $R_i-i$. This is because the shift $R_i-i$ effectively makes the partition $R$ strict, and, in the spin case, the partitions are strict from the very beginning. A particular manifestation of this phenomenon is also seen in the sum over the Young diagrams with a restricted number of lines, which is a τ-function of the BKP hierarchy with respect to the time variables $p_k$ and a τ-function of the KP hierarchy with respect to the time variables $\bar p_k$. Here $\bar R$ is the strict partition made from $R$: $\bar R_i=R_i-i$.
Matrix models and the character expansions
Similarly to the KP case, one can study the sums over the Q-functions of types (66) and (68) in the spin case in an attempt to associate them with matrix model partition functions. Hence, we look at the series with some fixed $r$. However, this sum is not a τ-function of the BKP hierarchy at all, which is not surprising, since the weight is not of the proper form (121), as it was in the case of the Hermitean Gaussian matrix model. Neither does $Q_R\{p_k=N\}$ make any sense as a representation dimension.
Fabrication-induced even-odd discrepancy of magnetotransport in few-layer MnBi2Te4
The van der Waals antiferromagnetic topological insulator MnBi2Te4 represents a promising platform for exploring layer-dependent magnetism and topological states of matter. Recently observed discrepancies between magnetic and transport properties have aroused controversies concerning the topological nature of MnBi2Te4 in the ground state. In this article, we demonstrate that fabrication can induce mismatched even-odd layer dependent magnetotransport in few-layer MnBi2Te4. We perform a comprehensive study of the magnetotransport properties in 6- and 7-septuple-layer MnBi2Te4 and reveal that both even- and odd-number-layer devices can show zero Hall plateau phenomena at zero magnetic field. Importantly, a statistical survey of the optical contrast in more than 200 MnBi2Te4 flakes reveals that the zero Hall plateau in odd-number-layer devices arises from the reduction of the effective thickness during fabrication, a factor that was rarely noticed in previous studies of 2D materials. Our finding not only provides an explanation for the controversies regarding the discrepancy of the even-odd layer dependent magnetotransport in MnBi2Te4, but also highlights critical issues concerning the fabrication and characterization of 2D material devices.
Introduction
The antiferromagnetic (AFM) topological insulator (TI) MnBi2Te4 provides promising opportunities for exploring various quantized topological phenomena [1][2][3][4][5][6]. As a layered A-type antiferromagnet, MnBi2Te4 bulk crystal is composed of septuple layers (SLs) stacked along the c-axis with intralayer ferromagnetic (FM) order and interlayer AFM order (Fig. 1a). The interplay between magnetic order and band topology gives rise to gapped surface states that exhibit half-quantized surface Hall conductivity σxy = 0.5 e²/h, where h represents the Planck constant and e denotes the elementary charge 7,8. Therefore, depending on the magnetizations of the top and bottom surfaces, few-layer MnBi2Te4 with different SL-number parity exhibits distinct topological quantum states 9. In odd-number-SL MnBi2Te4, the parallel magnetization on the two surfaces gives rise to the quantum anomalous Hall (QAH) effect 3,10, characterized by quantized Hall resistivity (ρyx) and vanishing longitudinal resistivity (ρxx) at zero magnetic field (H). In contrast, even-number-SL MnBi2Te4 displays a robust zero Hall plateau ρyx = 0 and large ρxx in a wide range of both μ0H and gate voltage (Vg), as the counter-propagating Hall currents in the two surfaces cancel out [11][12][13]. Because the zero Hall plateau with Chern number C = 0 is closely related to the topological magnetoelectric effect that stems from axion electrodynamics [14][15][16], a magnetic TI with antiparallel magnetizations of the two surfaces is widely believed to be an ideal system for realizing the axion insulator state 5,[17][18][19]. Recently, using circularly polarized light, the axion electrodynamics has been detected in a 6-SL MnBi2Te4 in the zero Hall plateau regime 20.
Despite the experimental demonstration of the QAH effect and the axion insulator state, the fabrication of high-quality MnBi2Te4 devices with the expected quantized properties remains a key challenge. In previous experiments, most odd-number-SL MnBi2Te4 devices exhibited a small AH effect far from quantization 3,21,22, while even-number-SL devices usually exhibited a linear normal Hall effect with negative slope in the AFM regime 4,23. More puzzlingly, recent magnetic and transport measurements [24][25][26][27] found that the AH effect disappeared in some odd-number-SL MnBi2Te4 with uncompensated AFM order, whereas a pronounced AH hysteresis occurred in some even-number-SL devices with fully compensated AFM order. Interestingly, the chirality of the AH hysteresis is opposite to the expected clockwise chirality for Mn-based TIs [28][29][30]. These counter-intuitive results have aroused widespread controversies regarding the topological nature of MnBi2Te4 in the AFM state, significantly impeding the exploration of other exotic topological quantum phenomena in topological antiferromagnets [14][15][16]31. Several distinct scenarios have been proposed to account for these anomalies, such as the competition between intrinsic and extrinsic mechanisms of the AH effect 32, the magnetoelectric effect from the orbital magnetization 33, and the layer-dependent hidden Berry curvature 34. However, all these ideas assume MnBi2Te4 crystals with perfect sample quality and electronic structure. As has been demonstrated by experiments [35][36][37], even starting with the most optimized crystal, the electronic structure of a fabricated device may change dramatically, which is a critical issue in Bi2Te3-family TI materials. Theoretical calculations also suggested that surface defects can result in redistribution of the surface charge from the first layer toward the second layer 38, which will modify the magnetotransport performance of few-layer MnBi2Te4. Consequently, a promising yet unexplored research direction is to elucidate whether the fabrication process can lead to distinct transport behaviors in MnBi2Te4, which may offer a novel perspective for resolving the discrepancies in previous experiments.
In this work, we report systematic magnetotransport studies and the evolution of optical contrast (Oc) on 223 MnBi2Te4 devices with varied thickness. All seven transport devices (from 5 SL to 8 SL) manifest quantized ρyx ~ h/e² in the field-polarized Chern insulator state, attesting to the high quality of our MnBi2Te4 devices. We demonstrate that the fabrication process can result in mismatched even-odd layer dependent magnetotransport in few-layer MnBi2Te4.
A comprehensive study of the magnetotransport behaviors in a 6- and a 7-SL device shows that both even- and odd-number-SL MnBi2Te4 can exhibit a zero Hall plateau at zero magnetic field.
A statistical survey of the Oc in more than 200 MnBi2Te4 flakes reveals that the effective thickness for magnetotransport can decrease by 1 SL after the electron-beam-lithography (EBL) process. Our finding not only provides an explanation for the controversies concerning the even-odd discrepancy of magnetotransport in few-layer MnBi2Te4, but also highlights critical issues regarding the fabrication and characterization of 2D material devices.
Device Fabrication and Basic Calibration of Transport Properties
MnBi2Te4 few-layer flakes were prepared via mechanical exfoliation on 285 nm SiO2/Si substrates (see Methods section). We then determined the thickness of the flakes using optical methods (Fig. 1b), atomic force microscopy (Fig. 1c) and a scanning superconducting quantum interference device (SQUID) (see supplementary section A). The calibration of thickness was also examined by additional layer-dependent measurements on flakes exfoliated from crystal #1, including nonlocal transport, scanning microwave impedance microscopy (sMIM), ultrafast pump-probe reflectivity, and Raman spectroscopy 13,[39][40][41]. By conducting the Oc measurement immediately after exfoliation in a glovebox, one can quickly determine the thickness without exposing the sample to the atmosphere. Figure 1d summarizes the one-to-one correspondence between Oc and thickness (SL number), which is highly consistent with the results measured in different crystals by another group 23. For few-layer MnBi2Te4, a remarkable feature is that Oc changes its sign from negative to positive when the thickness increases from 6 SL to 7 SL, as guided by the dashed line. After the identification of thickness, the flakes were fabricated into field-effect transistors by the standard EBL method and coated with a layer of polymethyl methacrylate (PMMA) for protection (see supplementary section A for details). To study the layer-dependent transport properties, we first measured the temperature (T) dependent ρxx for a 6-SL and a 7-SL device (S2 and S6) at μ0H = 0, with the Fermi levels (EFs) gated to the charge neutrality points (CNPs). Both flakes were derived from crystal #1. At the CNP, the transport is mainly conducted by the topological surface states or edge states. Therefore, both devices exhibit overall insulating behavior and display a kink feature at their Néel temperatures (TNs). Compared to TN ~ 25 K for MnBi2Te4 bulk crystals 5, the TNs for the few-layer devices are suppressed to 20.6 K and 21.6 K, respectively, possibly due to enhanced fluctuations in lower dimensions.
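Oc is not defined explicitly in the text above; a common definition for this kind of optical thickness identification, assumed in the following sketch, is the relative intensity contrast between the flake and the bare substrate on one colour channel of a calibrated optical image (the function names and masks are hypothetical):

```python
import numpy as np

def optical_contrast(img, flake_mask, substrate_mask, channel=0):
    """Optical contrast of a flake against the bare substrate.

    Assumed definition: Oc = (I_flake - I_substrate) / I_substrate * 100 %,
    evaluated on one colour channel of an optical image (array HxWx3).
    """
    flake = img[..., channel][flake_mask].mean()
    substrate = img[..., channel][substrate_mask].mean()
    return 100.0 * (flake - substrate) / substrate

def mean_contrast(img, flake_masks, substrate_mask):
    """Average Oc over several regions of the same flake, as described in
    the Methods, to reduce errors from non-uniform illumination."""
    vals = [optical_contrast(img, m, substrate_mask) for m in flake_masks]
    return np.mean(vals), np.std(vals)
```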
Layer-dependent Magnetoelectric Transport Properties for Varied Vg
As a layered AFM TI, the most intriguing feature of MnBi2Te4 is its layer-dependent transport properties. We performed systematic μ0H-dependent transport measurements on the two devices at different Vg (see supplementary section B for transport data at various T), as presented in Figs. 2a and 2b. With the application of Vg, EF is continuously tuned from the valence band towards the conduction band, manifested by the slope change of the normal Hall effect from positive to negative. For the 6-SL MnBi2Te4, the most remarkable feature lies in the broad zero Hall plateau in the low-field AFM regime when its EF is tuned within the band gap. In the panels enclosed by thick magenta boundaries, the zero Hall plateau persists in a wide range of Vg from 36 to 49 V. Meanwhile, ρxx shows insulating behavior and reaches as high as 4 h/e². These behaviors are indicative of the axion insulator state in even-number-SL MnBi2Te4, where the counter-propagating surface Hall currents give rise to a broad zero Hall plateau in ρyx and a large ρxx (refs. 5,12,13). An out-of-plane μ0H drives the system into a Chern insulator at the CNP (Vg = 42 V), where ρyx is quantized to h/e² and ρxx drops to zero for μ0H > 6 T. These behaviors are consistent with previous reports on the topological phase transition between the axion insulator and the Chern insulator in a 6-SL device 5,20.
Figure 2b shows the μ0H-dependent ρyx and ρxx at various Vg for the 7-SL device, which exhibits an unexpected zero Hall plateau phenomenon, rather than AH hysteresis, in the AFM state.
At high field, the 7-SL device shows transport behaviors very similar to the 6-SL device, with quantized ρyx and vanishing ρxx, as the Chern insulator quantization in the FM state does not depend on thickness. However, in the low-field AFM regime, some unexpected behaviors are observed. As guided by the black dashed lines, throughout the Vg range, ρyx displays overall linear behavior and smoothly changes its slope from positive to negative. No discernible hysteresis is observed during the field sweep. Remarkably, at Vg = 13 V, a wide zero Hall plateau appears between μ0H = ±3 T. Meanwhile, ρxx reaches its maximum, but with a smaller value than that of the 6-SL device. Theoretically, the zero Hall plateau phenomenon is unique to even-number-SL MnBi2Te4 with fully compensated AFM order and thus should be absent in odd-number-SL MnBi2Te4. These unexpected results strongly suggest the existence of some unknown mechanism that could modify the magnetotransport of few-layer MnBi2Te4.
In order to realize the QAH and axion insulator states in few-layer MnBi2Te4, EF must be tuned by Vg to lie in the Dirac-point gap opened by FM order. To reveal the nature of the zero Hall phenomena in the two devices, we extract the value of ρxx and the slope of ρyx at μ0H = 0 and plot them as a function of Vg. As displayed in Fig. 3a, ρxx of the 6-SL device first goes up to a large value of 4 h/e² for Vg < 25 V, remains unchanged in a broad Vg window, and then decreases to a small value for Vg > 50 V. Meanwhile, dρyx/dH exhibits a clear three-stage transition with varying Vg. In the first stage, with Vg < 30 V, dρyx/dH progressively decreases with increasing Vg, which is attributed to the depletion of hole-type carriers. For Vg from 25 to 30 V, dρyx/dH changes sign from positive to negative. As Vg is further increased, a broad zero plateau forms and persists within a Vg window of 13 V. Further application of Vg injects more electron-type carriers and ultimately leads to negative dρyx/dH. Such behaviors unequivocally suggest that the zero Hall plateau state in the 6-SL MnBi2Te4 is a genuine quantized Hall state (C = 0) with EF residing in the band gap, which is consistent with our previous report 5.
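A sketch of how the zero-field slope dρyx/dH can be extracted from a measured field sweep (the fitting window of ±0.5 T is illustrative, not taken from the paper):

```python
import numpy as np

def zero_field_hall_slope(H, rho_yx, window=0.5):
    """Slope d(rho_yx)/dH at H = 0 from a linear fit over |H| < window (T).

    Assumes rho_yx has already been antisymmetrized in H, so the fit
    intercept is ~0 and the slope reflects the normal Hall response.
    """
    sel = np.abs(H) < window
    slope, _ = np.polyfit(H[sel], rho_yx[sel], 1)
    return slope
```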
Despite the superficially similar zero Hall plateau during the μ0H sweep in the 7-SL device, it manifests a different behavior in response to Vg. In contrast to the 6-SL device, where dρyx/dH = 0 exists in a broad Vg window, for the 7-SL device dρyx/dH = 0 only appears at a single Vg point corresponding to the sign change of the ρyx slope. Meanwhile, we notice that for the 6-SL device there is a broad Vg range where the zero Hall plateau and the Chern insulator coexist.
However, for the 7-SL device, the zero Hall plateau only occurs at a Vg smaller than the Chern insulator regime (see supplementary section C for colormaps of ρyx and ρxx). For longitudinal transport, the Vg range for large ρxx in the 7-SL device is also narrower than in the 6-SL device.
To better visualize the different manifestations of the zero Hall plateaus, we summarize the variations of dρyx/dH with Vg and μ0H for the two devices in two colormaps, as shown in Figs. 3c and 3d. The magenta dashed lines label the regimes where dρyx/dH = 0. They clearly show that there is a well-defined zero-Hall-resistivity plateau regime in the parameter space of the 6-SL device. For the 7-SL device, however, the zero Hall plateau exists in a narrower regime.
The quantitative differences of the zero Hall plateaus in the Vg range, as well as in the T range (see supplementary Fig. S4), indicate different manifestations of the zero Hall plateau, associated with axion insulator states of different energy gaps.
The observation of the zero Hall plateau phenomenon in the 7-SL device bears resemblance to a recent observation of discrepancies between magnetic order and transport properties in few-layer MnBi2Te4, where the absence of the AH effect was observed in a 5-SL device with uncompensated AFM order, while a pronounced AH effect was found in a 6-SL device with fully compensated AFM order 24,25. Previous magnetic measurements have demonstrated that the AFM order in MnBi2Te4 is highly robust and persists up to the top surface 42,43. In contrast, the surface electronic band structures have been found to be fragile and sensitive to the type and concentration of defects 38,[44][45][46][47]. We notice that most of the magnetization measurements in previous reports 24,25 were performed on MnBi2Te4 with a fresh surface, whereas the transport measurements were conducted exclusively on devices after fabrication. It is highly possible that the even-odd discrepancy of magnetotransport in MnBi2Te4 arises from the influence of the fabrication process. To verify our conjecture, we tracked the Oc values measured before and after fabrication for the two devices, as illustrated in Figs. 3c and 3d. Surprisingly, we find a substantial Oc reduction from +12.5 to -0.2 % for the 7-SL device after fabrication, indicating that the thickness determined by Oc is significantly reduced by 1 SL. In contrast, the Oc value of the 6-SL device is less influenced, only changing slightly from -7.4 to -10.0 %.
Statistical Survey of Optical Properties and Their Effects on Charge Transport
In order to figure out the reason for the color change and to exclude any artificial factor that might contribute to our observation, such as the transport electrodes, fabrication conditions, and imaging parameters, we conducted thorough control experiments on many few-layer flakes and compared the Oc changes under different conditions (see supplementary section D for details). To mitigate potential interference from extrinsic effects, such as thermal cycling, environmental doping, and aging, Oc was obtained immediately after surface treatment in a glovebox 36,37,48,49. Of the many relevant factors, we notice that the contact with PMMA plays the most crucial role in the reduction of Oc, a factor that was rarely noticed in previous studies of 2D materials. We have performed a statistical survey on more than 200 MnBi2Te4 flakes exfoliated from four crystals grown by different groups, and the main results are summarized in Figs. 4a and 4b. The most striking observation is that most of the studied MnBi2Te4 flakes exhibit Oc reduction, although to different extents, which has never been reported in previous studies of MnBi2Te4. As presented in Fig. 4b, the blue and magenta dashed lines mark Oc reductions of 0 and 20 %, respectively. The flakes situated close to the blue dashed line display little Oc change after device fabrication, whereas the flakes close to the magenta dashed line experience a pronounced Oc reduction, corresponding to an effective thickness decrease of 1 SL. The subtle increase of Oc in certain samples is attributed to measurement error (see Methods). In the top panel of Fig. 4a, we present the optical images of four typical MnBi2Te4 flakes, which clearly illustrate the pronounced color change caused by the fabrication process. In Fig. 4c, we further analyze the distribution of the Oc change for the different crystals. The leftward shift of the center of the blue lines clearly indicates that the impact of the fabrication process on Oc is highly crystal-dependent. For most of the samples exfoliated from crystal #1, the Oc values are only slightly affected. In contrast, almost all the flakes exfoliated from crystal #4 exhibit a significant reduction in Oc, corresponding to a thickness of 1 SL.
Discussion
Based on the above experimental observations, we discuss possible explanations for the even-odd discrepancy of magnetotransport in few-layer MnBi2Te4. It may be suspected that one physical layer is unintentionally removed during the fabrication process, leading an odd (even)-number-SL MnBi2Te4 to manifest transport behaviors characteristic of an even (odd)-number-SL MnBi2Te4 with 1 less SL 24,25. However, such a scenario can be safely excluded. We performed atomic force microscopy measurements on the flakes exfoliated from the most sensitive crystal (#4). All these samples exhibit pronounced Oc reduction during the fabrication process (see supplementary section E); however, their physical heights determined by atomic force microscopy remain unchanged. In supplementary section F, we also compare the variations in the magneto-optical Kerr effect (MOKE) and the coherent interlayer phonon frequency of two MnBi2Te4 flakes before and after PMMA contact. This further demonstrates that the fabrication mainly affects the effective thickness rather than the physical thickness. Therefore, a more plausible scenario is that the change of Oc arises from the modification of the magnetic or electronic structures 45,46,50. In experimental research on MnBi2Te4, it is a widespread phenomenon that MnBi2Te4 exhibits sample-dependent behaviors, whether between different crystals or between different flakes exfoliated from the same crystal 3,47,51. A prevailing understanding attributes this to the various defects and the non-uniformity within MnBi2Te4 bulk crystals. It has been highlighted that surface defects and perturbations to the surface can result in instability of MnBi2Te4 (refs. 38,47,50-56). Given the intricate physical and chemical processes involved in fabrication, we attribute the Oc variation to the fabrication-catalyzed instability of the MnBi2Te4 surface.
It is worth noting that some imaging experiments and theoretical calculations have clearly identified physical mechanisms that can result in a decrease of effective thickness. For instance, a scanning transmission electron microscopy imaging experiment demonstrated that the synergistic effect of a high concentration of Mn-Bi site mixing and Te vacancies can trigger a surface reconstruction process from one SL of MnBi2Te4 to a quintuple layer of Mn-Bi2Te3 and an amorphous double layer of MnxBiyTe (ref. 50). As a result, the effective thickness of the MnBi2Te4 structure is reduced by 1 SL. Theoretical calculations also reveal that a surface charge redistribution process can relocate the surface state from the first SL to the second SL, resulting in a decrease of the effective thickness for magnetotransport 51. Recently, a theoretical work demonstrated that a small expansion of the interlayer van der Waals gap can result in a noteworthy reduction of the surface gap 56. Specifically, for a (7+1)-SL MnBi2Te4, it triggers a topological phase transition with a Chern number change of one. An odd (even)-number-SL MnBi2Te4 will then naturally manifest magnetotransport properties akin to its even (odd)-number-SL counterpart with 1 less SL. Based on the sample-dependent defect type and concentration, as well as the susceptibility of the MnBi2Te4 surface to perturbations 47,48,50,51,56, we hypothesize that the sample-dependent behaviors observed during fabrication arise from the PMMA-catalyzed surface instability. Notably, prior research on graphene, MoS2, and WSe2 indeed suggested that PMMA residuals on the surface influence the intrinsic properties of 2D materials [57][58][59]. They can not only increase the observed thickness in atomic force microscopy measurements through adsorption, but also act as a charge source, prompting surface charge redistribution. Our topography measurement has indeed shown island-like PMMA residuals on the MnBi2Te4 surface (see supplementary Fig. S8). In addition, various adsorbates trapped between layers during fabrication can also expand the van der Waals gap 60. Therefore, it is likely that the combined influences of non-uniformity, defects, and PMMA contribute to the sample-dependent behaviors in response to fabrication. Further studies are needed to fully understand the underlying mechanisms. In Figs. 4d and 4e, we display the process of effective thickness reduction, with the magenta frame indicating the effective thickness for transport.
The reduced gap elucidates the narrower Vg and T range of the zero Hall plateau for the 7-SL sample (S6).
While the precise mechanism through which PMMA influences the quality of MnBi2Te4 samples remains incompletely understood, a potential solution to circumvent this fabrication issue is to isolate PMMA from the surface during fabrication. Building upon recent advancements in low-damage lithography in the QAH system 35,37, we suggest that depositing a thin layer of AlOx on the surface of MnBi2Te4 prior to fabrication may alleviate the damage from PMMA. In supplementary section G, we present preliminary results obtained on crystal #5, which demonstrate the efficacy of the modified method in addressing the current issue.
In addition to the zero Hall plateau in the 7-SL MnBi2Te4 device, the fabrication-induced mismatched layer-dependent magnetotransport behaviors are also evident in MnBi2Te4 flakes with other thicknesses, as displayed in Figs. 4f and 4g. Among the seven samples, devices S1 and S5 were derived from crystals #3 and #2, respectively. All the other devices were derived from crystal #1. Notably, the PMMA-insensitive MnBi2Te4 flakes with less-affected Oc (blue stars in Fig. 4b) exhibit the anticipated behaviors for both even- and odd-number-SL MnBi2Te4. In contrast, samples with pronounced Oc change (red stars in Fig. 4b) exhibit transport behaviors inconsistent with their nominal thickness. Specifically, as shown in Fig. 4g, odd-number-SL devices display no AH hysteresis in the AFM regime, while even-number-SL devices display hysteresis with counterclockwise chirality, as indicated by the black arrows.
The AH effect with reversed chirality may arise from the electric field due to the gate or substrate, or from the competition between various intrinsic and extrinsic mechanisms 23,32,33,48,61. In addition to the Hall effect, since the transport in odd- and even-number-SL MnBi2Te4 is conducted by chiral and helical edge states 13,39, respectively, the fabrication-induced mismatched even-odd dependent magnetotransport should also be manifested in nonlocal transport measurements, which is indeed observed in our experiment (see supplementary section H for details).
We have conducted a comprehensive investigation of the transport properties in a large number of few-layer MnBi2Te4 flakes. By tracking the quantized Hall plateau with respect to μ0H and Vg, and comparing the optical properties before and after the fabrication process, our study elucidates the relationship between transport behaviors and the device fabrication process.
Our research has uncovered a condition in which the effective thickness for charge transport in MnBi2Te4 becomes decoupled from its pristine physical thickness, which has never been reported in previous studies. Although the exact microscopic mechanism underlying the change of Oc remains to be determined, and we cannot exclude that even devices exhibiting unchanged Oc are affected by fabrication, because the AH effect (0.1 h/e²) in odd-number-SL MnBi2Te4 is not quantized, our experiments still provide highly valuable insights for the fabrication of high-quality MnBi2Te4 toward realizing quantized phenomena. Our finding not only explains the controversies concerning the mismatched even-odd layer-dependent magnetotransport in MnBi2Te4, but also highlights critical issues regarding the fabrication and characterization of devices based on 2D materials.
Methods
Crystal growth
High-quality MnBi2Te4 single crystals were synthesized independently by different methods. Crystal #1 was grown by directly mixing Bi2Te3 and MnTe in a 1:1 ratio in a vacuum-sealed silica ampoule. After being heated to 973 K, the mixture was slowly cooled down to 864 K, followed by a long annealing process. The phase and crystal structure were examined by X-ray diffraction on a PANalytical Empyrean diffractometer with Cu Kα radiation. Crystal #2 was grown by the conventional flux method. Mn powder, Bi, and Te were weighed in the ratio Mn:Bi:Te = 1:8:13 (MnTe:Bi2Te3 = 1:4) in an argon-filled glovebox. The mixture was loaded into a corundum crucible, which was sealed into a quartz tube. The tube was then put into a furnace and heated up to 1000 °C for 20 hours. After a quick cooling to 605 °C at a rate of 5 °C/h, the mixture was slowly cooled down to 590 °C at a rate of 0.5 °C/h and kept there for 2 days. Finally, the crystals were obtained after centrifuging. Crystal #3 was grown by the conventional high-temperature solution method. Mn, Bi, and Te blocks were weighed in a ratio of Mn:Bi:Te = 1:11.3:18 and placed in an alumina crucible, which was then sealed in a quartz tube in an argon environment.
The assembly was first heated up in a box furnace to 950 °C and held for 10 hours, and then cooled down to 700 °C within 10 hours and further cooled down to 575 °C in about 100 hours.
After the heating procedure, the quartz tube was taken out quickly and decanted into a centrifuge to remove the flux from the crystals. Crystal #4 was grown by the flux method using MnCl2 as the flux. The raw materials of Bi2Te3 powder, Mn lump, Te lump, and MnCl2 powder were mixed in a molar ratio of 1:1:1:0.3 and then placed in a dry alumina crucible, which was sealed in a fused silica ampoule under vacuum. The ampoule was then placed in a furnace, heated up to 850 °C over 20 hours, kept there for 24 hours, cooled down to 595 °C over 5 hours, kept there for 150 hours, and finally cooled to room temperature in 5 hours. After the steps above, the resulting ingot was cleaved into millimeter-sized crystals with metallic luster. Crystal #5 was grown by directly mixing Bi2Te3, MnTe, and Te in a ratio of 1:1:0.2 in a vacuum-sealed silica ampoule. The ampoule was slowly heated to 900 °C at a rate of 3 °C/min and maintained at this temperature for 1 hour. Subsequently, the sample was cooled at a rate of 3 °C/min to 700 °C and held at this temperature for 1 hour. The temperature was then gradually decreased to 585 °C at a rate of 0.5 °C/min and maintained for annealing for 12 days. After the annealing process, the quartz ampoule was quenched in water to avoid phase impurities. Millimeter-sized MnBi2Te4 crystals were obtained after crushing the ingot.
Scanning SQUID measurement
Scanning SQUID measurements were carried out in a different cryostat from the transport measurements. Scanning two-junction SQUID susceptometers with two balanced pickup loops of 2 μm diameter in a gradiometric configuration were utilized as the SQUID sensors. Each of them was surrounded by a one-turn field coil of 10 μm diameter.
The DC flux was measured through the pickup loop using a voltage meter (Zurich Instruments HF2LI) as a function of position and reflects the intrinsic magnetization of the sample.
Polar MOKE measurement
Polar MOKE measurements were carried out using a 633 nm HeNe laser. After transmitting through a linear polarizer, the light was focused to a 2 µm spot on the sample by a reflective objective at normal incidence, to avoid the large backgrounds that occur when a typical lens is used. The sample was mounted on a cold stage at 3 K within the vacuum chamber of an optical superconducting magnet system. The reflected beam is modulated at ~50 kHz by a photoelastic modulator (PEM), split by a Wollaston prism, and detected using a balanced photodiode. The resulting 50 and 100 kHz modulations detected by lock-in amplifiers correspond to the ellipticity and rotation angle of the beam, respectively. We additionally modulate the intensity of the beam with a 2317 Hz chopper to measure the DC signal for normalization using a third lock-in amplifier.
Device fabrication
MnBi2Te4 flakes were exfoliated onto 285 nm-thick SiO2/Si substrates using the Scotch-tape method in an argon-filled glove box with O2 and H2O levels below 0.1 ppm. Before exfoliation, all SiO2/Si substrates were pre-cleaned by air plasma for 5 minutes at ~125 Pa pressure. To minimize the experimental errors due to subtle differences in measurement conditions, such as the position of the flakes in the light field, the uniformity of illumination, the size and shape of the sample, and the presence of electrodes, the Oc values shown in the main text were calculated by averaging the Oc of different parts across the sample. For the transport devices, thick flakes around the target sample were first scratched off using a sharp needle in the glove box. A layer of 270 nm PMMA was spin-coated before EBL and baked at 60 °C for 5 minutes. After the EBL, 23 to 53 nm thick Cr/Au electrodes (3/20 to 3/50 nm) were deposited by a thermal evaporator connected to an argon-filled glove box. Before the fabrication and sample transfer processes, the devices were always spin-coated with a PMMA layer to avoid contact with air. All seven devices (S1-S7) shown in the text were fabricated through the same process.
Transport measurement
Four-probe transport measurements were carried out in a cryostat with a base temperature of 1.6 K and an out-of-plane magnetic field of up to 9 T. The longitudinal and Hall signals were acquired simultaneously via lock-in amplifiers with an AC current (200 nA, 13 Hz) generated by a Keithley 6221 current source. To correct for geometrical misalignment, the longitudinal and Hall signals were symmetrized and antisymmetrized with respect to the magnetic field, respectively. The back-gate voltages were applied by a Keithley 2400 source meter.
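The (anti)symmetrization step mentioned above admits a compact implementation; this sketch assumes the sweep has been resampled onto a field grid that is symmetric about zero, so that reversing the array corresponds to H → −H:

```python
import numpy as np

def symmetrize(H, rho_xx, rho_yx):
    """Correct for geometrical misalignment of the Hall-bar contacts.

    rho_xx is even in H and rho_yx is odd in H, so contact misalignment
    (which mixes the two) is removed by (anti)symmetrizing the sweeps.
    Assumes H is a grid symmetric about zero, i.e. H[::-1] == -H.
    """
    rho_xx_sym  = 0.5 * (rho_xx + rho_xx[::-1])   # rho_xx(H) + rho_xx(-H)
    rho_yx_anti = 0.5 * (rho_yx - rho_yx[::-1])   # rho_yx(H) - rho_yx(-H)
    return rho_xx_sym, rho_yx_anti
```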
Fig. 4 | Statistical analysis of Oc for more than two hundred flakes and distinct thickness-dependent transport properties. a, Optical images of four representative samples taken in a glove box right after exfoliation (top panel) and after the removal of PMMA (bottom panel). b, Summary of the Oc values of 223 MnBi2Te4 flakes after exfoliation and after the removal of PMMA. The blue and magenta dashed lines mark Oc reductions of 0 and 20 %. Different colored dots represent data acquired from different crystals. c, Distribution of the Oc change in the four different crystals. For the most PMMA-sensitive crystal (#4), fabrication can give rise to an Oc change corresponding to a thickness of 1 SL. d-e, Illustrations of the influence of PMMA on the surface electronic structure of a 7-SL MnBi2Te4. f-g, Thickness-dependent ρyx behaviors for MnBi2Te4 without (blue) and with (red) severe Oc change.
Testing Correspondence between Areas with Hydrated Minerals, as Observed by CRISM/MRO, and Spots of Enhanced Subsurface Water Content, as Found by DAN along the Traverse of Curiosity
Possible correlation is studied between Water Equivalent Hydrogen (WEH) in the Martian subsurface, as measured by the DAN (Dynamic Albedo of Neutrons) instrument along the Curiosity traverse, and the presence of hydrated minerals on the surface, as seen from orbit by the CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) instrument onboard MRO (Mars Reconnaissance Orbiter). A cross-analysis of the subsurface WEH values from DAN passive measurements with the distribution of hydrated minerals over the surface of Gale crater, according to the Specialized Browse Product Mosaics, is performed for the initial 20 km of the traverse. As a result, we found an increase of up to 0.4 wt% in the mean WEH value for the surface areas with spectral signatures of polyhydrated sulfates. The increase is shown to be higher for more prominent spectral signatures on the surface. A similar WEH increase for the two other types of hydrated minerals, monohydrated sulfates and phyllosilicates, was not found for the tested part of the traverse. Polyhydrated sulfates, being part of the sedimentary deposits composing the surface of Gale crater, should have the considerable thickness that is necessary for subsurface neutron sensing by DAN measurements.
Introduction
Gale crater was presumably formed during the late Noachian period (about 3.7-3.8 Ga) as a result of a large meteorite impact [1]. Its radius is about 150 km, and its initial depth is thought to be about 5 km. In its evolutionary history from formation to modern times, one may conditionally distinguish two main stages [2,3]. The first stage corresponds to the Noachian period with a possibly warm and humid climate on the planet (or at least with episodic warm conditions), when Mars had a rather dense atmosphere. During this stage, the crater could be occasionally filled up with water and turned into a lake, at the bottom of which weathering of primary rocks in contact with an alkaline water environment produced phyllosilicates [4,5]. The first stage ended by the beginning of the Late Hesperian, when the climate of Mars became close to the modern one, with a thin atmosphere and a dry and cold surface. By the end of the first stage, Gale crater is thought to have been filled up with layered sedimentary deposits [6].
In the second stage, the sedimentary deposits filling Gale crater were exposed, probably by wind erosion, creating Mount Sharp, the 5.5-km-tall central mound, which is not related to the central peak formed during the impact event [7]. The lowest visible units of Mount Sharp contain a variety of minerals that are indicative of aqueous conditions. Phyllosilicate (including the groups smectite, vermiculite, illite, kaolinite, serpentine, micas, and chlorite, commonly called clay minerals) spectral signatures are observed in some stratigraphic units near the base of Mount Sharp, and sulfate-bearing minerals (such as anhydrite, bassanite, gypsum, and jarosite) are observed in younger, stratigraphically higher sedimentary units [8,9].
This mineralogical transition suggests that the conditions under which the sediments were deposited changed through time. The broad mineral stratigraphy, with sulfate-bearing units overlying phyllosilicate-bearing units, has been recognized in similarly aged deposits globally on Mars [10]. This mineralogical succession may mark the beginning of the transition from the Noachian to the Hesperian, i.e., from a relatively wet and warm early Mars to a very dry and cold modern Mars [8,11].
Thus, the sediments on the modern surface of Gale crater represent a natural record of Mars's hydrological evolution, where a study of the composition and sequence of sedimentary strata from the crater floor up to the top of its central mound allows disclosing the changes of environmental conditions along the chronology of their formation [6]. At present, ground water in the Martian soil may precipitate from the current thin atmosphere, forming multilayers of molecules on the grains of regolith (as adsorbed water) and filling the pore volume between grains (as free water ice). Though there is no direct evidence of ground ice, indirect evidence for the formation of frost at the surface of Gale crater exists [12]. Therefore, both kinds of water might currently exist in the shallow subsurface of Gale crater: water in the form of chemically bound molecules in hydrated minerals and water as adsorbed molecules in the regolith. The presence of water in the subsurface of Gale crater is proven by the DAN active neutron sensing experiment onboard the Curiosity rover [13][14][15]. This paper presents the results of a comparative analysis of subsurface water abundance, as derived from DAN passive measurement data [16], together with data on the surface mineral distribution, as measured by CRISM onboard MRO. This analysis is thought to allow distinguishing which kind of ground water most likely exists in the subsurface along the Curiosity traverse over the bottom of Gale crater.
DAN Measurements along the Rover Traverse
The DAN instrument is an active neutron detector for sensing the subsurface layer of about 60 cm thickness by pulses of 14 MeV neutrons produced by the pulsing neutron generator (PNG) [15]. Pulses of neutrons produce the post-pulse emission, or Dynamic Albedo of Neutrons. Two DAN neutron counters record the time profiles of the post-pulse total neutron emission: CTN in the thermal and epithermal energy range and CETN in the epithermal energy range.
Since hydrogen in the Martian subsurface is most likely part of either hydroxyl or water molecules, its content is conventionally measured in terms of Water Equivalent Hydrogen (WEH). On the other hand, the content of neutron-absorbing nuclei in the subsurface is evaluated by a single measurable parameter, the so-called Absorption Equivalent Chlorine (AEC) [17]. Chlorine is selected because it is considered the major contributor to neutron absorption in the Martian regolith. The value of AEC takes into account not only the mass fraction of chlorine itself, but also all other absorbers in the subsurface matter, if their mass fractions differ from the values predicted by the so-called "standard composition" model of the Martian soil [18].
DAN started operating on Mars on August 12, 2012, just after the rover landing [19]. The data for the current analysis were obtained from that time until November 2018, which corresponds to 2218 sols and 19,971 m of distance along the traverse. According to the rover flight rules, DAN active operations are only allowed at rover stops. Thus, estimates of WEH and AEC based on active measurements are available for the rover parking spots only [13]. For the first 20 km of the rover traverse, until sol 2218, the mean WEH and AEC values are found to be (2.6 ± 0.7) wt% and (1.0 ± 0.1) wt%, respectively [13].
While the PNG operates only during active sessions lasting 15-30 minutes at stops, the DAN counters work almost continuously, both at rover stops and during drives. When the PNG is off, the neutron counters continuously measure the local neutron emission. The flux and energy spectra of the surface albedo neutrons, produced both by the Multi-Mission Radioisotope Thermoelectric Generator and by Galactic Cosmic Rays, largely depend on the presence of hydrogen, measured as WEH, and of neutron absorbers, measured as AEC, in the subsurface matter.
Thus, DAN continuous passive measurements give an opportunity to determine the WEH value at any particular spot below the rover along the traverse [16].
A special procedure of DAN passive data processing has been developed based on the empirically found relationships between active and passive data, measured simultaneously at a total of 328 rover stops (see [16] for details).
This empirical relationship, as well as the knowledge of the AEC, is used for obtaining the continuous profile of WEH values. The physical size of an individual spot on the surface for passive neutron sensing is shown to be about 3 meters in diameter [17], so the physical resolution of WEH variations along the traverse can be associated with this scale.
Distances between the rover stops vary from several meters up to hundreds of meters. To estimate AEC between two stops along the traverse, one needs to make an additional assumption about the scale of AEC spatial variations. It was suggested in [16] to use two approaches. The first one assumes a long-range (LR) scaling of AEC variability, when a smooth interpolation of the AEC value is thought to be applicable along the path from one stop to another. In this case, the AEC value at each point between two stops can be derived from the interpolated values of active measurements at these stops. The second approach postulates that AEC might vary on a scale of several meters or so, presuming a short-range (SR) scaling of the AEC variability. In this case, the AEC value at each intermediate point of the traverse between two stops is thought to be randomly distributed according to the entire data set of active measurements. For the part of the rover traverse studied in this paper, the mean AEC value for all active measurements is found to be equal to (1.0 ± 0.1) wt%.
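The two AEC scaling assumptions can be summarized in a short sketch (the stop positions and AEC values below are hypothetical numbers for illustration only):

```python
import numpy as np
rng = np.random.default_rng(0)

def aec_long_range(s, stop_s, stop_aec):
    """LR scaling: smooth interpolation of AEC between adjacent stops."""
    return np.interp(s, stop_s, stop_aec)

def aec_short_range(s, stop_aec):
    """SR scaling: AEC at each 3-m spot drawn at random from the
    empirical distribution of all active measurements."""
    return rng.choice(stop_aec, size=len(s))

# Hypothetical example: stops at 0, 120 and 400 m along the traverse.
stop_s   = np.array([0.0, 120.0, 400.0])
stop_aec = np.array([0.9, 1.1, 1.0])        # wt%
s = np.arange(0.0, 400.0, 3.0)              # 3-m resolution spots
aec_lr = aec_long_range(s, stop_s, stop_aec)
aec_sr = aec_short_range(s, stop_aec)
```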
After processing the DAN passive data up to sol 2218, two continuous profiles of the WEH spatial variability were obtained with a distance resolution of 3 meters, using the LR and SR approaches (Figure 1). The WEH value was found to vary from around zero to 6.3 wt% [16]. These data were used for the cross-analysis with CRISM spectral data products; see below.
CRISM Data Products for Cross-Analysis with DAN Data
The CRISM instrument onboard the NASA MRO spacecraft performs imaging spectrometry in the visible and near-infrared wavelength range of 362-3920 nanometers. Such chemicals as iron, oxides, carbonates, etc. on the Martian surface have characteristic spectral features in the visible and infrared ranges and are distinguishable by CRISM [20]. For the current cross-analysis with DAN data, we used the publicly available Specialized Browse Product Mosaics [21]. The CRISM team created this data product specifically for studying the Curiosity landing site. Hyperspectral images with a high spatial resolution of about 20 m, not degraded by increased noise or by atmospheric opacity, were selected as the source images for creating the products. Mathematical processing was applied to reflectance values at key wavelengths associated with diagnostic or indicative mineral structure on the surface. The resulting composites of individual parameters reflect the thematic mineralogical diversity of the surface. A high value in the image plane indicates a relatively strong spectral feature for the particular product as compared to the range present regionally around the landing site [21].
In our study, two Specialized Browse Product Mosaic products were of special interest: "HYD" and "ALT." Both were constructed from images in the IR spectral range, as they characterize the spectral features of minerals that are thought to be formed by the interaction of rocks with liquid water. These two data products represent the surface distribution of such minerals as phyllosilicates (generally Fe-smectites) and mono- or polyhydrated Mg-sulfates (mostly kieserite and hexahydrite, respectively). The "HYD" data product shows indicators of hydrated minerals with a focus on the hydrated sulfates, while the "ALT" data product focuses on Fe/Mg phyllosilicates on the surface.
It should be taken into account that hydrated minerals, which are believed to be present in the shallow subsurface, might not be revealed by the detection of their spectral indicators on the surface, as they might be covered by dust or by a thin upper layer of a different mineralogical composition. While DAN senses the subsurface down to 1 m depth, CRISM images the uppermost layer of the Martian surface. However, deposits of hydrated minerals that spread from the top down into the subsurface should be detectable by both instruments: by CRISM from orbit and by DAN from the surface. To test the presence of such deposits and to map them along the rover traverse was the goal of the performed cross-analysis, as presented below.
Cross-Analysis of DAN Passive Data with CRISM Data Products
A total of 1028 CRISM mapping pixels located along the traverse of the rover were selected. For each such pixel, with a size of about 20 meters, the mean WEH value inside it was evaluated according to the DAN passive measurement data, processed by the LR and SR approaches (see Section 3, Tables 1 and 2). Uncertainties of the WEH mean values were derived from the uncertainties of the WEH values of the contributing distance intervals.
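A sketch of this averaging step, assuming each 3-m DAN spot has already been assigned to a CRISM pixel identifier; the uncertainty of the mean is propagated from the per-spot WEH uncertainties:

```python
import numpy as np

def mean_weh_per_pixel(weh, weh_err, pixel_id):
    """Mean WEH (and its uncertainty) of the 3-m DAN spots falling
    into each ~20-m CRISM pixel; plain error propagation of the mean."""
    out = {}
    for pid in np.unique(pixel_id):
        sel = pixel_id == pid
        n = sel.sum()
        out[pid] = (weh[sel].mean(),
                    np.sqrt(np.sum(weh_err[sel]**2)) / n)
    return out
```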
For each of the three types of hydrated minerals (phyllosilicates, monohydrated sulfates, and polyhydrated sulfates), testing groups of corresponding CRISM pixels were selected, which manifest the spectral signatures of these minerals. Three groups with 51, 45, and 101 CRISM pixels were identified for phyllosilicates, monohydrated sulfates, and polyhydrated sulfates, respectively. A reference group of 831 CRISM pixels with no spectral signatures of any of the three types of hydrated minerals was also composed. The method of cross-analysis of the DAN and CRISM data is based on comparing the distribution of the mean WEH values for the testing group of CRISM pixels attributed to a particular mineral with the distribution of the mean WEH values for the reference group of CRISM pixels. As the simplest test, the average values and sample variances of WEH for the testing and reference groups are compared. In addition, Pearson's chi-squared test is used for more precise testing of the statistical difference between them. Two distributions are considered statistically distinct if the probability of their coincidence (p-level) is sufficiently small, p < 0.001.
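A sketch of such a comparison between a testing and the reference WEH sample with Pearson's chi-squared statistic (the binning is illustrative; the paper does not specify it):

```python
import numpy as np
from scipy import stats

def compare_weh(test, ref, bins=10):
    """Pearson chi-squared comparison of a testing WEH sample against
    the reference sample (a sketch; binning choices are illustrative)."""
    edges = np.histogram_bin_edges(np.concatenate([test, ref]), bins=bins)
    obs, _ = np.histogram(test, edges)
    ref_counts, _ = np.histogram(ref, edges)
    # Expected counts in the testing group under the reference distribution.
    exp = ref_counts / ref_counts.sum() * obs.sum()
    keep = exp > 0
    chi2 = np.sum((obs[keep] - exp[keep])**2 / exp[keep])
    p_level = stats.chi2.sf(chi2, df=keep.sum() - 1)
    return chi2, p_level
```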
One finds that the distributions of WEH for the groups of CRISM pixels associated with either phyllosilicates or monohydrated sulfates are not distinct from that of the reference group (Tables 1 and 2). For both cases of WEH estimation, using either the LR or the SR approach, the p-level values indicate rather good agreement between the WEH distributions for the testing and reference groups.
On the other hand, an evident distinction from the reference group is found for the testing group of 101 CRISM pixels with the spectral signature of polyhydrated sulfates (Figure 2(a)). The differences between the mean values of WEH are (0.2 ± 0.1) wt% for the LR approach (Table 1) and (0.4 ± 0.1) wt% for the SR approach (Table 2). According to Pearson's chi-squared test, the p-levels for the statistical coincidence between the WEH distributions of the testing and reference groups are found to be much less than 0.001 (see Tables 1 and 2). Thus, the mean WEH value for the testing group of CRISM pixels associated with the presence of polyhydrated sulfates is confidently larger than the mean WEH value for the reference group of pixels, which do not have the spectral signature of this type of mineral.
It is reasonable to expect that a stronger spectral signature of the presence of minerals on the surface (seen as larger values in the RGB image plane) might correspond to a higher content of WEH detected by DAN in the subsurface. To check this assumption, the testing group of 101 CRISM pixels linked with polyhydrated sulfates was divided into two subgroups. The "high intensity" subgroup includes pixels with brightness >20 in the RGB image plane; correspondingly, the "low intensity" subgroup includes pixels with brightness <20. The value of 20 was chosen so as to divide the total group into two statistically equal subgroups. For these two subgroups, the distributions of WEH were built, and their comparison with the reference distribution was performed (Tables 1 and 2). Figure 3 shows the WEH distributions for the testing subgroup of CRISM pixels with "high intensity" polyhydrated sulfates for the cases of the LR (a) and SR (b) approaches. A pronounced shift to larger values is evident for the WEH distributions of the "high intensity" subgroup in comparison with the WEH distribution for the reference group (Figure 3). Their mean WEH values become equal to (2.9 ± 0.1) wt% and (3.1 ± 0.1) wt% for the cases of the LR and SR approaches, respectively. Besides, the Pearson test proves that the WEH distributions for the "high intensity" subgroup of CRISM pixels are confidently distinct from that of the reference group; the values of the p-level are much less than 0.001 (Tables 1 and 2).
As we stated in Section 2 of this paper, the value of WEH is not measured directly, but is obtained through the modelling of active and passive measurement data and is, therefore, model-dependent. To exclude probable effects of model dependency, we performed a similar analysis as described above for the initially measured parameter of neutron emission. Instead of the WEH value, we used the ratio of the count rates of total (S_CTN) and epithermal (S_CETN) neutrons emitted by the surface, namely F_DAN = S_CTN / S_CETN (for more details, see [16]). The results of the performed analysis are presented in Table 3. The relationship between the parameters of the F_DAN distributions is found to be similar to that of the same parameters of the WEH distributions (Tables 1 and 2). Only the distribution of F_DAN for the testing group of CRISM pixels associated with the presence of polyhydrated sulfates is confidently different from the reference distribution of F_DAN for the group of pixels which do not have the spectral signature of hydrated minerals. So, one has to conclude that the relationship found between the presence of polyhydrated sulfates and the increase of water in the shallow subsurface is not produced by the WEH deconvolution procedure, but manifests a physical relation between such minerals and the neutron emission.
Discussion
Thus, the conclusion should be drawn that, along the traverse from the landing site to the distance mark of about 20 km, the presence of polyhydrated sulfates on the surface, as observed by CRISM, is consistent with an increase of WEH values within a subsurface layer of about 60 cm thickness, as measured by DAN. On the other hand, no such phenomenon is found for the groups of CRISM pixels associated with the spectral signatures of phyllosilicates or monohydrated sulfates. One may speculate that the parts of the traverse which manifest the phenomenon of CRISM-DAN cross-correspondence are associated with sedimentary strata containing polyhydrated sulfates of probably significant thickness, which leads to an enhanced mass fraction of water in comparison with the "usual" subsurface with some standard mass fraction of water. The top surfaces of such matter are observable by CRISM from the Martian orbit, and their deeper volumes are detectable by DAN from the Martian surface. One may suggest naming such strata layers of polyhydrated-sulfate-rich matter, or PHSR matter.
To test this simplest interpretation of the observed phenomenon, one may check whether the observed WEH distribution in the area with PHSR matter (Figure 2) can be modeled by a bimodal function with two distinct components: a "less-WEH" component and a "more-WEH" component (Figure 4). The "less-WEH" component can be associated with the "usual" matter. Its shape can be taken from the known distribution of WEH for the reference group of CRISM pixels with no signatures of any of the three types of hydrated minerals (Figure 2). The mean WEH values of this component are known to be (2.54 ± 0.02) wt% for the LR approach and (2.40 ± 0.02) wt% for the SR approach (see Table 1). The "more-WEH" component can be associated with polyhydrated sulfates. As the simplest option, this "more-WEH" component can be represented by a normal distribution with two free parameters: the mean value and the sample variance.
Using such a bimodal model, one may fit the observed WEH distribution in the area of CRISM pixels with a spectral signature of polyhydrated sulfates. In addition to the two free parameters of the "more-WEH" component, one more parameter is used for fitting: the relative fraction α of this component with respect to the total integral of the entire observed distribution. Two cases of observed distributions are tested, with WEH derived by either the LR or the SR approach of DAN passive data processing. A one-tailed Pearson chi-squared test is applied to find the best-fitting parameters of the bimodal model (Table 4). In both cases, LR and SR, the Pearson criterion gives p-values for consistency between the observed distribution and the modeling function of between 0.001 and 0.15 (Table 4).
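A minimal sketch of such a bimodal fit is given below; the binning, the synthetic histograms, and the starting values are hypothetical, and only the structure (a fixed reference shape plus a Gaussian with free mean, width, and weight α, fitted by minimizing a chi-squared) follows the procedure described in the text.

```python
# Sketch of the bimodal fit (hypothetical data and binning): the "less-WEH"
# component is the fixed, normalized reference histogram and the "more-WEH"
# component is a Gaussian with free mean, width, and weight alpha.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def bimodal_counts(params, ref_density, centers, widths, n_total):
    alpha, mu, sigma = params
    density = (1 - alpha) * ref_density + alpha * norm.pdf(centers, mu, sigma)
    return n_total * density * widths

def chi2(params, obs, ref_density, centers, widths):
    exp = bimodal_counts(params, ref_density, centers, widths, obs.sum())
    mask = exp > 0
    return np.sum((obs[mask] - exp[mask]) ** 2 / exp[mask])

bins = np.linspace(1.0, 5.0, 21)
centers, widths = 0.5 * (bins[:-1] + bins[1:]), np.diff(bins)
rng = np.random.default_rng(1)
ref, _ = np.histogram(rng.normal(2.5, 0.4, 5000), bins=bins)
ref_density = ref / (ref.sum() * widths)      # normalized reference shape
obs, _ = np.histogram(np.concatenate([rng.normal(2.5, 0.4, 600),
                                      rng.normal(3.1, 0.3, 250)]), bins=bins)

res = minimize(chi2, x0=[0.3, 3.0, 0.3],
               args=(obs, ref_density, centers, widths),
               bounds=[(0, 1), (2.0, 4.5), (0.05, 1.0)])
alpha, mu, sigma = res.x
print(f"alpha = {alpha:.2f}, mean = {mu:.2f} wt%, sigma = {sigma:.2f} wt%")
```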
e "more-WEH" component has the mean WEH values equal to 3.0 or 3.6 wt % for the cases of LR and SR, respectively (Table 4 and Figure 2).
Taking this result into account, one may speculate that the two components of the WEH distribution represent two fractions of PHSR matter. The "less-WEH" component corresponds to the "usual" soil, and the "more-WEH" component may be attributed to deposits of polyhydrated sulfates. The bimodal approximation of the observed WEH distribution for the area of PHSR matter allows the fraction parameter α, the contribution of the "more-WEH" component, to be determined. This fraction is about 0.37 or 0.12 (as a part of 1) of the entire WEH distribution, based on the LR or SR approach, respectively. Under the proposed identification, the fraction α corresponds to the part of the subsurface volume that contains deposits of polyhydrated sulfates thick enough for neutron sensing. The other fraction, 1−α, corresponds to the "usual" soil. Interestingly, the fraction of the "more-WEH" component approaches 1 for the subgroup of 51 pixels with the "high intensity" spectral signature of polyhydrated sulfates (Table 4). This is another piece of evidence for identifying the "more-WEH" component with areas whose substance is dominated by deposits of polyhydrated sulfates. The fraction of "usual" soil in the PHSR matter may be present practically everywhere along the traverse.
This substance does not contain noticeable quantities of any of the three types of hydrated minerals and has an average WEH of about 2.5 wt% (see Tables 1 and 2). The WEH value of the second fraction of PHSR matter corresponds to the number of water molecules in the structure of polyhydrated sulfates; as derived from the DAN data, it is about 3 wt% (Table 4). The chemically bound water of hydrated minerals was embedded in their structure long ago, when these minerals formed under aqueous conditions. Therefore, the PHS-dominated substance is thought to contain the "initial water" of Mars. The mass fraction of polyhydrated sulfates in the subsurface substance may vary along the traverse. At some spots with the most intense spectral signatures of polyhydrated sulfates, the "more-WEH" component, as shown above, may account for the observed WEH distribution entirely. One may use the DAN passive data to test for the presence of polyhydrated sulfates in the subsurface along the traverse with a spatial resolution of a few hundred meters.
To perform such a test, we split the total path of about 20 km into 132 distance intervals 150 m long, each including around 50 passive measurements of WEH with a spatial resolution of 3 m. The distribution of WEH values in each distance interval is fitted by the already known bimodal function with only one free parameter, α, the fraction of the known "more-WEH" component (the "less-WEH" component enters with fraction 1−α). The best-fitting value of α can be considered the average mass fraction of polyhydrated sulfates in the subsurface. Performing this analysis for all 132 distance intervals yields the profile of the polyhydrated sulfates mass fraction along the traverse (Figure 5). The profile shows that the highest value of α, equal to 0.21 (at 15% confidence), is observed around the distance mark of 16,300 m. Indeed, according to the CRISM data, this area is characterized by an increased brightness of the spectral signature of polyhydrated sulfates (Figure 6). Thus, the analysis of the DAN passive data makes it possible to identify sites with an increased content of polyhydrated sulfates in the shallow subsurface. One such site is found at the distance mark of 16,300 m of the traverse, on the way from the Bagnold Dunes to the Vera Rubin ridge.
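The profiling step can be sketched in the same spirit; the inputs below are again synthetic placeholders, and only the logic (150 m intervals, one free parameter α per interval, fixed component shapes) mirrors the procedure described above.

```python
# Sketch of the along-traverse profiling (hypothetical inputs): the path is
# split into 150 m intervals and, in each one, only the fraction alpha of the
# fixed "more-WEH" component is fitted to the local WEH histogram.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

bins = np.linspace(1.0, 5.0, 21)
centers, widths = 0.5 * (bins[:-1] + bins[1:]), np.diff(bins)

rng = np.random.default_rng(2)
ref, _ = np.histogram(rng.normal(2.5, 0.4, 5000), bins=bins)
less = ref / (ref.sum() * widths)             # fixed "less-WEH" shape
more = norm.pdf(centers, 3.0, 0.3)            # fixed "more-WEH" shape

def chi2(alpha, obs):
    exp = obs.sum() * ((1 - alpha) * less + alpha * more) * widths
    mask = exp > 0
    return np.sum((obs[mask] - exp[mask]) ** 2 / exp[mask])

# distance (m) and weh stand in for the ~3 m resolution passive measurements.
distance = np.arange(0, 19800, 3.0)
weh = rng.normal(2.5, 0.4, distance.size)

alphas = []
for start in np.arange(0, 19800, 150.0):      # 132 intervals of 150 m
    sel = (distance >= start) & (distance < start + 150.0)
    obs, _ = np.histogram(weh[sel], bins=bins)
    res = minimize_scalar(chi2, bounds=(0.0, 1.0), args=(obs,), method="bounded")
    alphas.append(res.x)
print(f"max alpha along the traverse = {max(alphas):.2f}")
```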
Conclusions
The cross-analysis of the WEH values from the DAN passive measurements onboard Curiosity and the Specialized Browse Product Mosaics of the CRISM spectrometer onboard MRO was performed for the part of the rover traverse from the landing site up to the distance mark of 19,971 m. It was found that traverse intervals with the spectral signature of polyhydrated sulfates, as detected by CRISM, contain more WEH in the subsurface than intervals that do not manifest signatures of any of the three selected types of hydrated minerals: phyllosilicates and mono- and polyhydrated sulfates.
This effect indicates that polyhydrated sulfates exist at some places along the traverse as layers up to 60 cm thick, which are well detectable by DAN with its sensing depth of about 60 cm in the subsurface. A bimodal distribution of WEH was found for the distance intervals along the traverse with the spectral signature of polyhydrated sulfates. The "less-WEH" component of this distribution is consistent with the distribution observed at the dominant majority of distance intervals, which do not manifest spectral signatures of any of the three tested types of hydrated minerals. The average water content for this type of matter is 2.5 wt%. The "more-WEH" component of this distribution is thought to be associated with a second type of matter, whose composition is likely dominated by polyhydrated sulfates. The WEH value for this type is about 3 wt% or larger. The absence of any difference between the WEH distributions for the distance intervals without spectral signatures of the three tested types of hydrated minerals and for the distance intervals with spectral signatures of phyllosilicates and monohydrated sulfates does not necessarily imply a discrepancy between the CRISM and DAN observations. Indeed, the uppermost layer of the subsurface with such hydrated minerals may be seen by CRISM but might be too thin for detection by DAN. One may suspect that, for some reason, the top layers with polyhydrated sulfates are thick enough to be detected by DAN, while the top layers of phyllosilicates and monohydrated sulfates are not. The CRISM pixels with the spectral signature of polyhydrated sulfates cover about 10% of the 20 km of the rover traverse that was analyzed. The large thickness of the polyhydrated sulfate deposits suggests a long period in the past during which such layers had enough time to accumulate. On the other hand, the areas of CRISM pixels with spectral signatures of phyllosilicates and monohydrated sulfates are most likely associated with rather thin layers on top of the ordinary rocks and soil; DAN is not sensitive to such thin layers on top of ordinary matter. One may expect that the presence of these hydrated minerals will also be confirmed by cross-analysis of CRISM and DAN data when the rover climbs up Aeolis Mons, where the deposits of phyllosilicates or monohydrated sulfates might be thick enough for the detection of the WEH increase they contribute.
Data Availability
Data from this work are publicly accessible on the Planetary Data System, https://www.pds.nasa.gov/.
Disclosure
This paper was presented at the EGU General Assembly 2020 [22].
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Environmental Science",
"Physics",
"Geology"
] |
Extremal surfaces in glue-on AdS/TT̄ holography
Abstract: TT̄-deformed CFTs with positive deformation parameter have been proposed to be holographically dual to Einstein gravity in a glue-on AdS3 spacetime [1]. The latter is constructed from AdS3 by gluing a patch of an auxiliary AdS3* spacetime to its asymptotic boundary. In this work, we propose a glue-on version of the Ryu-Takayanagi formula, given by the signed area of an extremal surface. The extremal surface is anchored at the endpoints of an interval on a cutoff surface in the glue-on geometry. It consists of an RT surface lying in the AdS3 part of the spacetime and its extension to the AdS3* region. The signed area is the length of the RT surface minus the length of the segments in AdS3*. We find that the Ryu-Takayanagi formula with the signed area reproduces the entanglement entropy of a half interval for TT̄-deformed CFTs on the sphere. We then study the properties of extremal surfaces in various glue-on geometries, including Poincaré AdS3, global AdS3, and the BTZ black hole. When anchored on multiple intervals at the boundary, the signed area of the minimal surfaces undergoes phase transitions with novel properties. In all of these examples, we find that the glue-on extremal surfaces exhibit a minimum length related to the deformation parameter of TT̄-deformed CFTs.
Introduction
The AdS/CFT correspondence provides a nonperturbative formulation of quantum gravity in asymptotically anti-de Sitter spacetimes. Although originally formulated in string theory, many aspects of the correspondence are universal and sensitive only to its low-energy (super)gravity approximation. As with many other examples of dualities in physics, the AdS/CFT correspondence can be used both ways: it can be used to learn aspects of gravity from conformal field theory and vice versa.
In the early days of AdS/CFT, relevant deformations of the boundary CFT by a double-trace operator O² were understood to induce a change in the boundary conditions of the field ϕ dual to O [2,3]. Since this deformation induces an RG flow from a UV to an IR fixed point, the asymptotically AdS metric is not affected by the deformation. A more dramatic effect is found, however, when the boundary CFT is deformed by an irrelevant operator involving components of the stress tensor, e.g. by the TT̄ operator [4-6]. In this case, the boundary conditions of an otherwise asymptotically AdS spacetime are changed, mixing both leading and subleading components of the metric [7].
The TT̄ deformation of a holographic CFT2 can be used to gain a better understanding of holography as we move away from strictly asymptotically AdS3 spacetimes.¹ Conversely, gravity can be used to gain a better understanding of holographic CFTs deformed by the TT̄ operator. This approach is particularly appealing because it gives a geometric interpretation to the TT̄ deformation.
When the deformation parameter µ is negative, the TT̄ deformation induces Dirichlet boundary conditions for the metric on a hypersurface at a fixed radial distance from the origin of the spacetime [12].² This is equivalent to introducing a finite cutoff in the bulk, and the TT̄-deformed CFT can be interpreted as living at this cutoff surface. The advantage of this formulation is that many results in TT̄-deformed CFTs with µ < 0 can be understood geometrically as a consequence of the finite cutoff. This includes, in particular, the derivation of the spectrum, superluminal propagation, black hole thermodynamics, and the partition function [1,12].
The sign of the TT̄ deformation plays a crucial role in the properties of the theory. For example, a negative value of µ leads to superluminal propagation and a complex spectrum at high energies. These problems are not present when µ > 0, in which case TT̄-deformed CFTs are well defined for arbitrarily high energies. The cutoff AdS3 proposal is not applicable for µ > 0, however. In this case, we have put forward a new holographic proposal dubbed glue-on AdS3 holography. In this proposal, the cutoff surface is pushed beyond the asymptotic boundary of AdS3 into an auxiliary AdS3* region. The TT̄-deformed CFT2 can be viewed as living on this cutoff surface [1] (see fig. 1). We have previously shown that glue-on AdS3 holography reproduces the spectrum, subluminal propagation, black hole thermodynamics, and the partition function of TT̄-deformed CFTs with a positive deformation parameter.
In AdS/CFT, the Ryu-Takayanagi (RT) and Hubeny-Rangamani-Takayanagi (HRT) formulae tell us that the area of an extremal surface attached to the boundary of an interval at the AdS boundary yields the entanglement entropy of that interval in the dual CFT [13,14]. The HRT formula is an example of how geometry encodes features of the dual CFT, and it has been instrumental in shaping our understanding of the emergence of spacetime. It is therefore natural to ask what role HRT surfaces play in the glue-on AdS3/TT̄ correspondence. This question is particularly interesting because, with the exception of the µ < 0 results on the sphere and on the plane [15,16], a general nonperturbative derivation of the entanglement entropy of TT̄-deformed CFTs is still lacking (see [17-27] for related perturbative and nonperturbative approaches).
In this paper we study extremal surfaces in glue-on AdS3 spacetimes. Given an interval A on the cutoff surface, the glue-on version of the HRT formula is proposed to be given by the minimum value of the signed area of spacelike extremal surfaces homologous to A. This can be written as
$$S[A] = \min_{X_A \sim A} \frac{\mathrm{Area}[X_A]}{4G}, \qquad (1.1)$$
where G is Newton's constant, the spacelike surfaces X_A = X ∪ X* lie in both the AdS3 and AdS3* regions, and X_A ∼ A denotes all the surfaces X_A homologous to A. Here Area[X_A] is the signed area of X_A, the difference between the lengths of its AdS3 and AdS3* parts, such that³
$$\mathrm{Area}[X_A] = \mathrm{Area}[X] - \mathrm{Area}[X^*]. \qquad (1.2)$$
As shown in explicit examples, spacelike surfaces that extremize the signed area do not always exist. When there is no extremal surface for an interval A, we define the glue-on HRT formula as S[A] = 0. When extremal surfaces exist, the glue-on HRT formula is given by S[A] = Area[γ_A]/4G, where γ_A is the extremal surface that minimizes the signed area. The glue-on HRT surface γ_A consists of two parts: a standard HRT surface γ that lies entirely in AdS3, and its extension γ* to the AdS3* region, where it attaches to the endpoints of A. This is illustrated in fig. 1.
The simplest glue-on HRT surface we consider is obtained when the interval A connects two antipodal points on a two-sphere. In this case, the bulk spacetime is the glue-on extension of the sphere foliation of Euclidean AdS3. The corresponding glue-on HRT surface consists of a straight line through the center of the space that connects two antipodal points of the sphere. In this case, the signed area of the glue-on HRT surface matches the entanglement entropy S[A] of an interval A in a TT̄-deformed CFT with a positive deformation parameter [15].⁴ This motivates our study of extremal surfaces in more general cases, with the ultimate goal of understanding their relationship to the entanglement entropy of TT̄-deformed CFTs in more general scenarios.
Footnote 3: The glue-on HRT proposal is reminiscent of the swing surface proposal for the entanglement entropy of warped CFTs and BMS-invariant quantum field theories put forward in [28,29]. In analogy with that construction, there is a spacelike surface in the bulk that is attached to two segments connecting it to the endpoints of A. In contrast to swing surfaces, where the two segments are null and do not contribute to the area, the two segments of a glue-on surface are hyperbolic and contribute to the area via (1.2).
In order to further understand the glue-on HRT proposal (1.1), we work out explicit examples in Poincaré AdS3, global AdS3, and BTZ black holes. We expect this novel geometrical quantity to be related to the entanglement entropy of TT̄-deformed CFTs, and our results are compatible with this expectation. We show that the glue-on HRT formula in Poincaré AdS3 has several nice features, including non-negativity, monotonicity, concavity, purity, and the infinitesimal version of strong subadditivity. We also find a minimum length, ℓ²_min = 4cµ/3, below which extremal surfaces do not exist, so that the signed area vanishes by definition; the glue-on HRT formula for a single interval A in Poincaré AdS3 is given in (1.4). The appearance of a minimum length is compatible with our expectations from the TT̄ deformation, where the deformation parameter µ is related to a minimum length, see e.g. [30].
The single-interval result (1.4) is a building block for multiple intervals. In this case, the signed area of the glue-on HRT surfaces undergoes a phase transition similar to that of a CFT2 with a finite cutoff. The existence of a minimum length also plays an important role here, as it can lead to violations of subadditivity and strong subadditivity. The paper is organized as follows. In section 2 we review the TT̄ deformation and glue-on AdS3 holography. In section 3 we consider the entanglement entropy of TT̄-deformed CFTs on a half interval on the sphere and show how this result can be reproduced from the signed area of a glue-on HRT surface. In section 4 we provide a general glue-on HRT formula. Therein we describe in detail the glue-on HRT surface for a single interval in Poincaré AdS3, discuss the emergence of a minimum length, and find the phase diagram for the signed area of surfaces associated with multiple intervals. In this section we also describe general properties of the glue-on HRT formula. In section 5 we construct glue-on HRT surfaces in global AdS3 and BTZ spacetimes and describe some of their properties. In appendix A we consider a more general prescription for the holographic entanglement entropy of a half interval on the sphere, and in appendix B we provide a general formula for the signed area of an extremal surface on arbitrary stationary solutions of Einstein gravity.
Glue-on AdS holography for TT̄-deformed CFTs
In this section we review the TT̄ deformation and its holographic description in terms of glue-on AdS3 spacetimes.
TT̄-deformed CFTs
The TT̄ deformation of a two-dimensional QFT is a solvable irrelevant deformation driven by the stress tensor T_ij, such that the action I satisfies the flow equation (2.1) [4-6], where µ is the deformation parameter and the stress tensor is defined with respect to the metric γ_ij of the spacetime the QFT is defined on. The TT̄ deformation preserves the translational symmetry of the undeformed theory, so that the stress tensor is conserved (2.3). Furthermore, when the undeformed theory is a CFT with central charge c, the trace of the stress tensor satisfies the trace-flow equation (2.4) [7,12,31,32], where R⁽²⁾ is the Ricci scalar of the background metric γ_ij. The TT̄ deformation enjoys a number of properties that make it attractive from a purely field-theoretical point of view; see [33] for a review. In particular, the expectation value of the TT̄ operator is finite, i.e. free of coincident-point singularities, and it factorizes into the product of expectation values of the stress tensor [4]. The factorizability of the TT̄ operator implies that the spectrum of TT̄-deformed CFTs on the cylinder is solvable and given by (2.5), where R is the size of the cylinder, E(0) is the undeformed energy, and J(0) is the undeformed angular momentum. When the deformation parameter is negative, the argument of the square root becomes negative for large enough E(0); as a result, the spectrum of TT̄-deformed CFTs with µ < 0 becomes complex at high energies. In contrast, the spectrum of TT̄-deformed CFTs with µ > 0 is well defined at all energies, provided that the deformation parameter is bounded as in (2.6). This bound on µ arises by requiring a real ground-state energy. For positive µ satisfying (2.6), the torus partition function is well defined and shown to be modular invariant [34]. Furthermore, modular invariance implies that the torus partition function is universal when the undeformed CFT has a large central charge and a sparse spectrum [35].
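For orientation, the flow equation and the deformed spectrum referred to above are commonly quoted in forms such as the following (sign and normalization conventions for µ vary across the literature, so these should be read as representative shapes rather than as this paper's exact equations (2.1) and (2.5)):
$$\frac{\partial I}{\partial \mu} = \int d^2x\, \sqrt{\gamma}\, \mathcal{O}_{T\bar{T}}, \qquad \mathcal{O}_{T\bar{T}} \propto T^{ij}T_{ij} - \left(T^{i}{}_{i}\right)^2,$$
$$E(\mu) = \frac{R}{2\mu}\left(\sqrt{1 + \frac{4\mu\, E(0)}{R} + \frac{4\mu^2\, J(0)^2}{R^4}} - 1\right).$$
In this schematic form, the complex energies for µ < 0 at large E(0), and the reality bound on µ > 0 evaluated at the ground-state energy, can both be read off directly from the argument of the square root.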
Glue-on AdS holography
Let us consider a two-dimensional CFT that is dual to three-dimensional Einstein gravity with a negative cosmological constant. The AdS/CFT dictionary [2,3] tells us that deforming the CFT by the TT̄ operator changes the boundary conditions of the bulk metric g_µν [7]. In the absence of bulk matter fields, there is an alternative holographic description where the metric satisfies Dirichlet boundary conditions at a cutoff surface in the bulk. When µ < 0, this cutoff surface is located in the interior of the spacetime [12]. On the other hand, when µ > 0 the cutoff surface is located in an auxiliary AdS3* region that is obtained by analytic continuation of AdS3 and glued to its asymptotic boundary [1]. This version of holography for TT̄-deformed CFTs is dubbed glue-on AdS3 holography.
In order to describe the glue-on version of holography in more detail, let us consider the foliation of locally AdS3 spacetimes by timelike surfaces N_ζ of constant radial function ζ(x^µ). The metric can be written in terms of the coordinates x^µ = (ζ, x^i) as in (2.8), where n^µ is the unit vector normal to N_ζ and the x^i are the coordinates on N_ζ. In this gauge, the asymptotic boundary of AdS3 is located at ζ → 0⁺, and points with ζ > 0 lie in the interior of the AdS3 spacetime. For the spacetime to be asymptotically AdS3, the normal-normal component of the metric must have a fixed leading-order falloff [36]. The TT̄ deformation of a two-dimensional CFT is proposed to be holographically dual to Einstein gravity on a cutoff [12] or glue-on [1] AdS3 spacetime, defined by restricting the range of ζ as in (2.9), with the metric satisfying Dirichlet boundary conditions at the cutoff surface. The location of the cutoff surface N_c is related to the TT̄ deformation parameter µ by (2.10). When µ < 0, the cutoff surface N_c is moved towards the interior of AdS3 and we obtain a cutoff AdS3 spacetime. On the other hand, when µ > 0, the spacetime (2.9) is known as glue-on AdS3 and is obtained by analytic continuation of AdS3 to negative values of ζ.
Let us denote the ζ < 0 region of (2.9) by AdS3*, which is still a locally AdS3 geometry. The glue-on AdS3 spacetime can be interpreted as gluing AdS3* to the original AdS3 background along the shared asymptotic boundary of these spacetimes. The crucial difference between AdS3 and AdS3* is the relative sign between the γ_ij dx^i dx^j and ζ⁻¹ γ_ij dx^i dx^j line elements, which stems from the different signs of ζ. This relative sign tells us that the timelike coordinate x⁰ of AdS3 is spacelike in AdS3*, while the spacelike coordinate x¹ is timelike. Nevertheless, note that the metric the TT̄-deformed CFT couples to is identified with γ_ij, such that x⁰ is timelike and x¹ is spacelike for either sign of µ. For more details on glue-on AdS3 holography, including evidence for the correspondence, see [1].
TT̄ holographic entanglement entropy on the sphere
In this section we consider the holographic entanglement entropy of TT̄-deformed CFTs on the sphere. For µ < 0, the entanglement entropy of an interval connecting two antipodal points of the sphere can be computed nonperturbatively on the field theory side [15]. This result can be reproduced holographically from the length of an HRT surface connecting two antipodal points of a sphere at a finite cutoff in the bulk. We will show that this result can be easily extended to the µ > 0 case. On the field theory side, the entanglement entropy is found to be well defined for spheres whose radii are greater than a minimum value set by µ. On the bulk side, we find that the entanglement entropy is given by the signed area of a glue-on HRT surface that connects two antipodal points on a cutoff surface in the AdS3* region of glue-on AdS3.
Field theory derivation
In this section we calculate the entanglement entropy of a half interval on the sphere in TT̄-deformed CFTs with either sign of µ. The sphere partition function with µ < 0 has been previously computed in [15,37], and the result has been refined and generalized to µ > 0 in [1]. Here, we briefly review the derivation of the sphere partition function and then use it to calculate the entanglement entropy of a half interval.
Let us consider a TT̄-deformed CFT defined on a sphere of radius L with metric (3.1). The conservation law (2.3) and the trace-flow equation (2.4) can be solved directly on the sphere, yielding the stress tensor (3.2) [15]. Note that this expression is valid for both signs of µ. In particular, for positive µ, the stress tensor is real provided that the radius of the sphere is greater than a minimum value (3.3). The sphere partition function depends on both the deformation parameter µ and the radius L. The dependence of the partition function on µ can be determined from the definition of the TT̄ deformation (2.1) and the flow equation (2.4), giving (3.4). On the other hand, a change in the radius of the sphere is equivalent to a scale transformation, which is generated by the trace of the stress tensor. As a result, the partition function satisfies the differential equation (3.5). Given the solution of the stress tensor (3.2), the general solution to the differential equations (3.4) and (3.5) is given by (3.6), where a is an arbitrary integration constant with the dimension of length, interpreted as the renormalization scale. In [1], the cutoff scale a is kept independent of the deformation parameter µ and shown to agree with the bulk on-shell action. This differs from the choice made in [15], which depends only on µ. Nevertheless, the latter can be reproduced from (3.6) by choosing the cutoff scale for µ < 0 as in (3.7). This choice is motivated by the cutoff AdS3 proposal and the UV/IR relation of the AdS/CFT correspondence.⁵ For µ > 0, the choice of the cutoff scale (3.7) has a natural field-theoretical interpretation as the minimum length L_min required by the reality condition of the stress tensor (3.3). Using (3.7), the partition function takes the form (3.8), which reproduces the result of [15] when µ < 0.
Let us now turn to the entanglement entropy. The simplest interval A on the sphere is the geodesic connecting two antipodal points, of length ℓ_A = πL. The advantage of this choice is that the vacuum entanglement entropy S[A] can be easily computed using the replica trick on the n-sheeted sphere (3.9), in terms of the TT̄ partition function on that geometry (3.10). As shown in [15], the partition function satisfies a relation (3.11) whose last equality follows from (3.5). As a result, the entanglement entropy is determined by the partition function of TT̄-deformed CFTs on the sphere via (3.12). Using the expression for the partition function in (3.8), the entanglement entropy of the interval connecting two antipodal points on the sphere is then given by (3.13). Note that in the µ > 0 case, the entanglement entropy inherits the same range of validity as the partition function, being well defined only when ℓ_A = πL ≥ πL_min. In contrast, when µ < 0, we do not see the appearance of a minimum length. In the following section we will show that the entanglement entropy for µ > 0 matches a glue-on version of the HRT formula.
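For readers unfamiliar with the replica manipulation invoked here, the elided steps have the standard schematic form (a sketch, not this paper's exact equations (3.9)-(3.12)):
$$S[A] = -\partial_n \log\frac{\mathcal{Z}_n}{(\mathcal{Z}_1)^n}\bigg|_{n=1} = \left(1 - n\,\partial_n\right)\log \mathcal{Z}_n\Big|_{n=1},$$
where Z_n denotes the partition function on the n-sheeted sphere. The relation (3.11) then trades the n-derivative at n = 1 for a radius derivative of log Z_µ via the trace-flow equation, which is what produces the expression (3.12) in terms of the sphere partition function alone.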
It is important to note that the entanglement entropy (3.13) does not contain an independent UV cutoff scale, which makes it impossible to reproduce the standard CFT2 result in the undeformed limit µ → 0. The reason is that we have already identified the deformation parameter µ with the cutoff scale, as indicated in (3.7). In appendix A we discuss the renormalized version of the entanglement entropy with an arbitrary choice of a on the field theory side, and carry out a bulk computation to reproduce it. In the main text, we focus on the result (3.13).
Holographic entanglement entropy
Let us now turn to the bulk side of the glue-on AdS3/TT̄ correspondence. In AdS3/CFT2, the geometry dual to the vacuum of a CFT on the sphere is given by the sphere foliation of Euclidean AdS3, written as in (3.14). Here ζ is the same radial function used in (2.9), related to the standard radial coordinate r by ζ = 1/r². The glue-on version of this space is obtained by analytically continuing ζ to negative values, as in (3.15), where the cutoff ζ_c is related to the deformation parameter by (2.10), reproduced here for convenience as (3.16). Depending on the sign of µ, the cutoff surface, whose induced metric is (3.17), is located either in the interior of the AdS3 (ζ > 0) or AdS3* (ζ < 0) part of the spacetime. Note that the determinant of the metric in AdS3* is positive, although both the θ and ϕ coordinates are now timelike. Nevertheless, the line element of the TT̄-deformed theory is identified with ds²_c (3.17), where both the θ and ϕ coordinates are spacelike. Furthermore, note that the signature of (3.15) changes when ζ < −1, so the radial coordinate in the AdS3* region is restricted to ζ ≥ −1. The holographic dictionary (3.16) then implies that ℓ² ≥ cµ/3, which reproduces the condition on the radius of the sphere found on the field theory side (3.3).
Let us consider an interval A connecting two antipodal points of the sphere at the cutoff ζ = ζ_c < 0. We would like to find a geometric description of the entanglement entropy (3.13) in the glue-on AdS3 geometry. A subtlety arises due to the minus sign in front of the sphere part of the metric (3.15). While the interval A is spacelike with respect to the boundary metric (3.17), it is timelike with respect to the bulk metric (3.15), because A resides in the AdS3* region. Nevertheless, it is possible to connect the endpoints ∂A of the interval A by an everywhere spacelike surface that goes through the AdS3 part of the space. In fact, there is a natural way to extend the HRT surface from AdS3 to AdS3*. To see this, first note that the original AdS3 space is a solid ball, and the HRT surface, which we denote by γ, is just a radial line that starts from θ = 0 at the north pole of the asymptotic boundary (ζ → 0⁺), switches to θ = π at the origin (ζ = ∞), and continues to the south pole at ζ → 0⁺. We can extend the HRT surface to the AdS3* region of the glue-on AdS3 space (3.15) by extending both ends of the radial line γ until they hit the cutoff surface at ζ = ζ_c. By construction, the two radial half-lines in the AdS3* region, denoted by γ*, are spacelike geodesics. The glue-on HRT surface is defined as γ_A = γ ∪ γ* (3.18), which connects the two antipodal points on the cutoff sphere at ζ = ζ_c through the glue-on AdS3 space (see fig. 2 for an illustration). In particular, note that the surface (3.18) is everywhere spacelike and piecewise geodesic. The next step is to assign a geometric invariant to the glue-on HRT surface. When µ < 0, the cutoff surface is located in the interior of AdS3 and γ_A is a segment of the HRT surface, whose area has been shown to reproduce the entanglement entropy of TT̄-deformed CFTs (3.13) [15]. For µ > 0, the area integral extends to negative values of ζ, with the integrand taking the same form (3.20), where we have introduced a cutoff ϵ → 0⁺ at the asymptotic boundaries of AdS3 (ζ = ϵ) and AdS3* (ζ = −ϵ). It is not difficult to check that the divergences from the two cutoff surfaces cancel, so that the total integral is finite and independent of ϵ. The first term on the right-hand side of (3.20) is just the area (length) of the original HRT surface γ ⊂ AdS3, while the second term is the area (length) of γ* ⊂ AdS3* multiplied by a minus sign. This motivates us to define the glue-on version of the HRT formula in terms of the signed area (3.21). Integrating (3.20), it is not difficult to verify that the glue-on HRT formula (3.21) reproduces the entanglement entropy of TT̄-deformed CFTs on the sphere (3.13), as expressed in (3.22). The matching (3.22) suggests that the glue-on version of the HRT formula (3.21) can be interpreted as the holographic entanglement entropy of TT̄-deformed CFTs. In the following, we provide further support for this interpretation by showing that the glue-on HRT surface γ_A can be regarded as the minimal surface of the signed area functional (3.23), where X_A is an everywhere spacelike surface in glue-on AdS3 that is homologous to the interval A on the cutoff surface in AdS3*. The signed area of the surface X_A = X ∪ X* is the length of the segment lying in the AdS3 region (X) minus the length of the segments lying in the AdS3* region (X*). The extremality condition implies that the minimal surface has to be piecewise geodesic in both the AdS3 and AdS3* regions of the space. We can then prove (3.23) by showing that the radial surface γ_A is indeed extremal under infinitesimal variations of the gluing points at the asymptotic boundary. Consider a small deviation from γ_A, with the variation of the tangent vector parameterized by δθ′ ≡ d δθ/dζ and δϕ′ ≡ d δϕ/dζ. The correction to the signed area can then be written as in (3.24), where we have omitted higher-order terms in δθ′ and δϕ′. The integrand in (3.24) is always positive, so that the value of the signed area always increases. As a result, the local minimum is given by the radial surface γ_A, which justifies the proposal (3.23).
We have shown that the glue-on HRT formula (3.21) reproduces the entanglement entropy of a half interval of TT̄-deformed CFTs on the sphere (3.13). Therefore, it can be identified with the holographic entanglement entropy of the TT̄-deformed theory. This provides further support for glue-on AdS holography and motivates the more general proposal described in the next section.
Glue-on HRT proposal
In this section we provide a formal definition of glue-on HRT surfaces and a general prescription for the glue-on HRT formula. A glue-on HRT surface is made of multiple segments that are glued together at the asymptotic boundary. We will show that the extremality condition implies that the vector tangent to the HRT surface must be continuous across the asymptotic boundary, and that its signed area is finite. As an example, we consider in detail the HRT surfaces associated with single and multiple intervals in Poincaré AdS3. This analysis reveals the emergence of a minimum length that depends on the TT̄ deformation parameter and below which no HRT surface exists. We also describe general features of the glue-on HRT formula, including positivity, monotonicity, and subadditivity.
General prescription
The glue-on HRT formula is proposed to be given by (4.1), where A is an interval on a cutoff surface in the AdS3* (ζ < 0) region of a glue-on AdS3 spacetime. The surface X_A is homologous to A and consists of multiple segments that are glued together at the asymptotic boundary. Let X_ϵ denote the segment in the AdS3 (ζ > ϵ) region of the spacetime, and X*_ϵ the segment in the AdS3* (ζ < −ϵ) region, where ϵ → 0⁺ is a UV cutoff. The signed area of X_A is then given by (4.2). We conjecture that (4.1) is a quantity inherently associated with an interval A in TT̄-deformed CFTs. In the previous section, we showed that (4.1) reproduces the entanglement entropy of a half interval on the sphere, provided that the UV cutoff is identified with the deformation parameter. More generally, we expect (4.1) to be related to the entanglement entropy of TT̄-deformed CFTs in more general scenarios, although its precise relationship to the entanglement entropy is not addressed in this paper.
Let us now describe in more detail a few aspects of the proposal (4.1).
Finiteness. The two contributions to the signed area are divergent in the ϵ → 0⁺ limit, but their difference is finite. More explicitly, let us consider the region near the asymptotic boundary where the surface X_A crosses from the AdS3 region to the AdS3* region of the spacetime. The divergence in the area comes from the dζ²/4ζ² term in the metric of any asymptotically AdS3 spacetime (2.8). The signed area of the surface X_A in this region behaves as in (4.3), where p > ϵ and p* < −ϵ are two points on X and X*. We see that the divergences cancel, so that the signed area is finite.
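As a quick consistency check of this cancellation, the near-boundary contribution of the dζ²/4ζ² term can be integrated explicitly. The following sympy sketch (with hypothetical turning points p and p*, and the AdS radius set to one) confirms that the regulator ϵ drops out of the signed combination:

```python
# Near-boundary piece of the signed area: length in AdS3 from eps to p minus
# length in AdS3* from eps to |p*|, using ds = dzeta / (2*zeta) in AdS units.
import sympy as sp

zeta, eps, p, ps = sp.symbols("zeta epsilon p p_star", positive=True)

ads_part = sp.integrate(1 / (2 * zeta), (zeta, eps, p))        # log-divergent
ads_star_part = sp.integrate(1 / (2 * zeta), (zeta, eps, ps))  # same divergence

signed = sp.simplify(ads_part - ads_star_part)
print(signed)                    # log(p)/2 - log(p_star)/2: eps has dropped out
print(sp.limit(signed, eps, 0))  # finite as the regulator is removed
```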
The extremal surface. The extremization necessarily requires the surface that minimizes the signed area (4.1) to be piecewise geodesic. Since X_A is required to cross the asymptotic boundary, the signed area should also be extremal under variations of the gluing points. Let us consider a candidate surface X_A with multiple segments glued at the asymptotic boundary at the points {p_1, ..., p_m}. The extremality condition requires the surface γ_A with the minimum signed area to satisfy (4.4). When the boundary is a sphere, we have shown in the previous section that the minimal surface is the analytic extension to AdS3* of an HRT surface ending on the asymptotic boundary of AdS3. Generalizations of the HRT formula that consist of different segments, where the minimal surface is obtained by varying the location of the gluing points, have also been considered in other settings. These include the swing surface proposal for the holographic entanglement entropy of asymptotically flat spacetimes [29], as well as de Sitter holography and timelike entanglement entropy [38-40].
Let us now consider in more detail the extremality condition for the case of a single interval with endpoints a and b parameterized by (a^i, ζ_c) and (b^i, ζ_c). Any candidate extremal surface of (4.1) necessarily consists of three spacelike geodesics (4.5), where X_{â,b̂} is an AdS3 spacelike geodesic connecting the two gluing points (â^i, ϵ) and (b̂^i, ϵ) on a regulating surface N_ϵ near the asymptotic boundary of AdS3. On the other hand, X*_{a,â} is a spacelike geodesic in AdS3* that connects the endpoint (a^i, ζ_c) on the cutoff surface to the gluing point (â^i, −ϵ) on a regulating surface N_{−ϵ}, and similarly for X*_{b,b̂}.
Let D(x_1^µ, x_2^µ) denote the geodesic distance between any two points x_1^µ and x_2^µ, so that the signed area of X_A is given by (4.6). The extremality condition can then be written as (4.7). In addition, the fact that X_A is a spacelike surface implies that the gluing points on the regulating surfaces N_{±ϵ} must satisfy the following spacelike condition: â must be spacelike separated from a, while b̂ must be spacelike separated from b.
The existence of the extremal surface requires that the points â and b̂ satisfy both the spacelike condition and the extremality condition (4.7). As we will show later in explicit examples, when the endpoints a and b are too close to each other, i.e. when their distance on the cutoff surface is below a scale set by √(cµ), the extremality condition (4.7) cannot be satisfied in the regime where the spacelike condition holds. In this case, there is no extremal surface and we define S[A] = 0. This is motivated by the fact that, as we approach the minimum length scale from above, the signed area of the corresponding glue-on HRT surface approaches zero.
Assuming that the extremal surface exists, let us discuss the implications of the extremality condition (4.7). First, note that the derivatives of the distance functions with respect to the radial coordinate of the gluing point are given by (4.8). The minus signs are due to the fact that as |ζ| increases, the distance functions decrease in both the ζ > 0 and ζ < 0 regions. The gradient ∂_µ D(â, b̂) is normal to the equidistant surface between the points â and b̂. As a result, this covector is tangent to the geodesic connecting â and b̂, and points in the direction that increases the distance to the point b̂. In fact, Gauss's lemma (see e.g. [41]) implies that ∂_µ D is precisely the unit tangent covector, so the normalization is fully determined.
We can parameterize the surface X_A locally by ζ, so that it is described by x^i = x^i(ζ). In general, this parameterization leads to multi-valued functions.⁶ In the following, we focus on a single-valued branch, which is always possible near the asymptotic boundary. The tangent vector ξ^µ of X_A is then proportional to (dx^i/dζ, 1). From the previous discussion, we find that the tangent covector is the gradient of the distance up to a sign, so that near the asymptotic boundaries of AdS3 and AdS3* we have (4.9), where the normalization is fixed by matching the radial components. The extremality condition (4.7) then implies that the tangent vector is continuous across the asymptotic boundary. More explicitly, since the glue-on construction guarantees that the two-dimensional metric γ_ij is continuous across the asymptotic boundary, the derivatives dx^i/dζ should also be continuous, as expressed in (4.10). Note that (4.10) depends crucially on the asymptotic behavior of AdS3 and AdS3*, as well as on the signed area.
Recall that the extremal surface is piecewise geodesic in both the AdS3 and AdS3* regions of the spacetime. The condition (4.10) at the gluing point tells us that the geodesic in the AdS3* region can be constructed from that in AdS3 by simply continuing the range of the radial coordinate ζ to negative values. More explicitly, let us parameterize the single-valued branch of the HRT surface in AdS3 that is attached to the point â at the asymptotic boundary as in (4.11), where ζ_max is the turning point of the HRT surface, i.e. the maximum value of ζ. Its extension to the entire glue-on AdS3 spacetime is then given by (4.12), such that γ_a satisfies the continuity condition (4.10) and ends on the point a at the cutoff surface. Since the metric of AdS3* is obtained by a similar extension, the part of γ_a lying in the AdS3* region is automatically geodesic. Similarly, we can continue the other single-valued branch γ_b of the HRT surface attached to b̂ in AdS3 into a surface γ_b that attaches to the point b at the cutoff surface. The full glue-on HRT surface is then given by (4.13). The above argument generalizes to multiple intervals in a straightforward way. As we are considering intervals on a two-dimensional cutoff surface, the boundary of any interval A consists of an even number of points, which can be grouped into pairs {a^(k), b^(k)} with k = 1, ..., n. For a given grouping, the extremal surface is the union ∪_k γ^(k), where γ^(k) is the glue-on HRT surface anchored at {a^(k), b^(k)}. There are several ways of grouping the endpoints into pairs, and the final glue-on HRT surface is the one that minimizes the value of the signed area (4.14).

Single interval in Poincaré AdS3

In this section we illustrate in detail how the general prescription (4.1) works for single intervals in the glue-on version of Poincaré AdS3. In particular, we discuss the emergence of a minimum length that depends on the TT̄ deformation parameter and below which the glue-on HRT surface ceases to exist. Let us consider the TT̄ deformation of the vacuum of a CFT on the plane. The theory is proposed to be dual to the cutoff/glue-on version of Poincaré AdS3 (4.15), where the cutoff surface is located at ζ = ζ_c = −cµ/3ℓ². Depending on the sign of µ, the cutoff surface may be located inside AdS3 (ζ > 0) or in the interior of the AdS3* spacetime (ζ < 0). In both cases, the background metric the deformed theory couples to can be read off from the line element ds²_c = ℓ² dw⁺ dw⁻ at the cutoff surface. Since the coordinates w± = x ± t are not compactified, the corresponding TT̄-deformed CFT is defined on the plane.
We now focus on the µ > 0 case and consider a spacelike interval A on the cutoff surface. In terms of the dimensionful coordinates (ℓw⁺, ℓw⁻), the endpoints ∂A can be parameterized as in (4.16), so that the total length of the interval is ℓ_A = √(ℓ⁺ℓ⁻). The requirement ℓ⁺ℓ⁻ > 0 guarantees that A is spacelike with respect to the line element ds²_c at the cutoff surface. We can obtain the glue-on HRT surface by extending a spacelike geodesic in AdS3 across the asymptotic boundary towards the cutoff surface in the AdS3* region, leading to (4.17). The equations describing the glue-on HRT surface γ_A (4.17) take the same form as those describing the HRT surface in pure AdS3, except that ζ can now be negative. As illustrated in fig. 3a, γ_A is made of two parts that are glued at the asymptotic boundary (4.18). The first part of the glue-on HRT surface γ_A is denoted by γ* and consists of two hyperbolic segments that lie in the AdS3* region of the spacetime. These segments are attached to the endpoints of the interval A at the cutoff surface and extend towards the asymptotic boundary at ζ = 0⁻, where they attach to the endpoints ∂Â of an auxiliary interval Â parameterized as in (4.19). The auxiliary variables ℓ̂± are related to the physical ones ℓ± by (4.20). Since the AdS3* spacetime is also Poincaré, any spacelike surface in AdS3* connecting the endpoints of A must necessarily cross to the AdS3 region through the asymptotic boundary. The second part of the glue-on HRT surface, denoted by γ, lies in the ζ ≥ 0 region of the spacetime. It consists of a semicircle (the standard HRT surface) that attaches smoothly to the hyperbolic segments γ* at the endpoints ∂Â at the asymptotic boundary. In order for the surface γ_A to extend into the AdS3 (ζ > 0) region of the spacetime, the auxiliary parameters ℓ̂± in (4.20) must be real and nonvanishing. This requirement constrains ℓ_A to be larger than a minimum value (4.21). The emergence of a minimum length for the interval A is consistent with our expectations from TT̄, studies of which suggest that physically meaningful distances should be larger than the scale of nonlocality of the theory [30]. The latter is proportional to the square root of the deformation parameter, as in (4.21).
The extremality condition. We will now show that γ_A is indeed the extremal surface of minimum signed area when ℓ_A > ℓ_min. For simplicity, let us consider an interval on the t = 0 slice of the cutoff surface, so that ℓ⁺ = ℓ⁻ = ℓ_A. We consider a two-parameter family of surfaces X_A, labeled by â and b̂, that consist of three segments described by (4.22), where a = −ℓ_A/2ℓ and b = ℓ_A/2ℓ. The X*_{a,â} and X*_{b,b̂} segments are required to be spacelike surfaces connected to the left and right endpoints of the interval, respectively. This leads to the range (4.23) for â and b̂, where we have used the definition of ℓ_min in (4.21). These bounds are saturated when the spacelike surfaces X*_{a,â} and X*_{b,b̂} approach the lightcones of the endpoints. Note that the surfaces X_A exist for any ℓ_A > 0 but are generically not extremal. It is straightforward to verify that (i) the three segments making up X_A are all spacelike geodesics, so that the surface X_A is everywhere extremal except, generically, at the gluing points; and (ii) these segments are glued at the asymptotic boundary and anchored at ∂A, so that the piecewise geodesic X_A is homologous to A.
The signed area of the X_A surfaces can be written as in (4.6), where the distance functions are given explicitly by (4.24), with ϵ → 0⁺ regulating the location of the asymptotic boundaries of AdS3 and AdS3*. The distance function D(b, b̂) can be obtained from D(a, â) by letting (a, â) ↔ (b, b̂). As described earlier, the signed area is finite and independent of the regulator. The extremality condition (4.7) then yields (4.25), whose solution (4.26) gives precisely the endpoints of the auxiliary interval ∂Â defined in (4.19) and (4.20). The surface X_A satisfying (4.26) is nothing but the extremal surface γ_A. Furthermore, a second-order variation around the extremal point (4.26) shows that it indeed corresponds to the minimum signed area (4.27).
When ℓ_A < ℓ_min, the solution (4.26) becomes imaginary and there is no real solution to the extremality condition (4.25), so the signed area Area[X_A] has no extremal point.
In addition, we note that for generic values of â and b̂, the surface X_A is not smooth around these points. Indeed, the tangent vector along X_A is discontinuous across the asymptotic boundaries of AdS3 and AdS3*, namely between the ζ → 0⁺ and ζ → 0⁻ surfaces of the glue-on AdS3 spacetime. Using (4.9), the discontinuity in the tangent vector is related to the variation of the signed area (4.28). We have thus verified that the extremality condition (4.25) implies an identification of the first derivatives, namely (4.10). This provides the justification for the analytic continuation of the glue-on HRT surface (4.17): if we start from the continuity condition (4.10) and consider the geodesic equations on both sides of the asymptotic boundary, we end up with the unique analytic solution (4.17).
The glue-on HRT formula. Using (4.27) and the dictionary (2.10), we obtain (4.30), where we have also included the µ < 0 result previously obtained in [16]. By construction, the first line of (4.30) is valid only when the interval is larger than the minimum length, ℓ_A > ℓ_min. The limiting case ℓ_A = ℓ_min corresponds to the glue-on HRT surface γ_A approaching a lightlike geodesic, so that the signed area approaches zero. When ℓ_A < ℓ_min, there is no everywhere spacelike curve that extremizes the signed area, and hence the glue-on HRT surface ceases to exist. In this case, we have defined S[A] = 0. It is interesting to note that the peculiar behavior of S[A] observed above is also found in the entanglement entropy of the undeformed CFT when the size of the interval ℓ_A becomes less than or equal to the UV cutoff ϵ. Indeed, when ℓ_A = ϵ, the entanglement entropy S_CFT[A] = (c/3) log(ℓ_A/ϵ) vanishes, and it becomes negative when ℓ_A < ϵ. This is not surprising, as the entanglement entropy of an interval whose size is smaller than the UV cutoff is not physically well defined. Consequently, our results are consistent with the fact that, although UV-complete, TT̄-deformed CFTs feature a minimum length proportional to the square root of the deformation parameter [30]. This suggests a close relationship between the glue-on HRT formula and the entanglement entropy of TT̄-deformed CFTs beyond the case of a half interval on the sphere considered in section 3. In particular, note that a minimum length for the interval has also been observed for the HRT surface in the single-trace version of the TT̄ deformation with a positive deformation parameter [17].
Multiple intervals and phase transitions
The existence of a minimum length (4.21) leads to interesting consequences for the glue-on HRT surfaces of disjoint intervals. To illustrate this, let us consider two intervals A_1 and A_2 of sizes ℓ_1 and ℓ_2, respectively, separated by a distance ℓ_x on the same fixed-time slice (see fig. 5). The disjoint union A_1 ∪ A_2 has four endpoints, which can be grouped into two pairs in two different ways. Assuming that the separation between the intervals is greater than the minimum length (4.21), the glue-on HRT formula (4.13) reads (4.31), where S̃_ℓ is defined for convenience by (4.32). The first term in (4.31) comes from two disconnected glue-on HRT surfaces, as shown in fig. 5a, while the second term comes from the connected contribution shown in fig. 5b. [Figure 5: panels (a) and (b) show the two competing surfaces that are possible when the separation ℓ_x between the intervals is greater than ℓ_min. When ℓ_x < ℓ_min, there is no glue-on HRT surface associated with ℓ_x and the disjoint intervals A_1 and A_2 are treated as one, as illustrated in panel (c).]
When the sizes of the intervals are sufficiently close to ℓ_min, it is possible to show that only the disconnected HRT surface dominates when ℓ_x ≥ ℓ_min. More precisely, we find that S̃_{ℓ_1} + S̃_{ℓ_2} ≤ S̃_{ℓ_1+ℓ_2+ℓ_x} + S̃_{ℓ_x} for any ℓ_x, provided that the condition (4.33) holds. Interestingly, a similar result can be obtained from the holographic entanglement entropy in AdS3/CFT2 if we identify ℓ_min with the size of the UV cutoff ϵ.
Another novelty arises when ℓ_x < ℓ_min. In this case, the smaller HRT surface in fig. 5b ceases to exist. This means that although the intervals A_1 and A_2 are separated by a distance ℓ_x, they may be effectively treated as a single joint interval. The HRT surfaces of the single and multiple intervals are then allowed to compete, and the transition point is determined dynamically by minimizing between S̃_{ℓ_1} + S̃_{ℓ_2} and S̃_{ℓ_1+ℓ_2+ℓ_x}, as in (4.34). The transitions of S[A_1 ∪ A_2] are illustrated in fig. 6.
It is interesting to note that the behavior (4.34) of the glue-on HRT formula is similar to the holographic entanglement entropy of the undeformed CFT when ℓ_x is smaller than the UV cutoff ϵ of the theory. In this case, any separation ℓ_x smaller than ϵ is unphysical, so the disjoint intervals A_1 and A_2 can behave as a single one. Relatedly, since the glue-on HRT surface cannot resolve subregions of size ℓ_A < ℓ_min, the extremal surface associated with a region with multiple holes of size ℓ_x < ℓ_min cannot be distinguished from that of a region without any holes. This is compatible with the interpretation that ℓ_min corresponds to a minimum distance in the dual field theory. [Figure 6: as shown in panel (a), for sufficiently small ℓ_1 and ℓ_2, the "bridge" HRT surface (fig. 5b), whose entropy is given by the blue curves, never dominates.]
To summarize, we have found that the glue-on HRT formula for two disjoint intervals in TT̄-deformed CFTs with µ > 0 is given by (4.35), where S̃_{ℓ_i} is defined in (4.32) and ℓ_min = 2√(cµ/3).
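The min structure of (4.35) is easy to encode. The sketch below uses a placeholder single-interval function s_single (an illustrative shape only; the actual S̃_ℓ of (4.32) is not reproduced in this excerpt) and implements just the thresholds and the competition between the disconnected and connected configurations described above.

```python
import math

C, MU = 12.0, 0.01                      # hypothetical central charge and mu
L_MIN = 2 * math.sqrt(C * MU / 3)       # minimum length, l_min = 2*sqrt(c*mu/3)

def s_single(l):
    """Placeholder single-interval entropy: vanishes below l_min by definition.
    The logarithmic shape is illustrative, standing in for eq. (4.32)."""
    if l <= L_MIN:
        return 0.0
    return (C / 3) * math.log(l / L_MIN)

def s_two_intervals(l1, l2, lx):
    """Two disjoint intervals of sizes l1, l2 at separation lx, following the
    min structure of eq. (4.35)."""
    if lx < L_MIN:
        # separation below the minimum length: the intervals may merge into one
        return min(s_single(l1) + s_single(l2), s_single(l1 + l2 + lx))
    disconnected = s_single(l1) + s_single(l2)
    connected = s_single(l1 + l2 + lx) + s_single(lx)
    return min(disconnected, connected)

for lx in (0.05, 0.2, 1.0):
    print(f"lx = {lx:5.2f}: S[A1 u A2] = {s_two_intervals(2.0, 2.0, lx):.3f}")
```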
Features of the glue-on HRT formula in Poincaré AdS3
Let us now describe some interesting properties of the glue-on HRT formula for a single interval in Poincaré AdS3 (4.30) with µ > 0:
• Positivity. Although the signed area contains a minus sign, the glue-on HRT formula always yields non-negative results in Poincaré AdS3. This can be verified directly from the HRT formula for a single interval (4.30). The multi-interval result is also non-negative, since it is built from the single-interval HRT formula and does not introduce any additional minus signs. Furthermore, note that other locally AdS3 spacetimes can be obtained from Poincaré AdS3 by a coordinate transformation, which leaves the geodesic distance invariant. This suggests that positivity of the glue-on HRT formula also holds for other glue-on AdS3 spacetimes in Einstein gravity. However, in other spacetimes there may be additional extremal surfaces that cannot be obtained from a coordinate transformation and are the result of a nontrivial topology. We will come back to this point when we consider glue-on versions of global AdS3 and the BTZ black hole.
• Purity. An interval A shares the same endpoints with its complement A^c. Since Poincaré AdS3 has a trivial topology, the glue-on version of the HRT surface is the same for both A and A^c, so that S[A] = S[A^c]. This parallels the fact that the entanglement entropy of an interval in a pure state is the same as that of its complement, which is why we refer to this property as purity.
• C-function. In analogy with the Casini-Huerta C-function [42], we can define a C-function from the derivative of S[A] with respect to the interval size. For Poincaré AdS3, the resulting C-function is positive for all µ provided that ℓ_A > ℓ_min. This agrees with the C-function obtained from the holographic entanglement entropy of TT̄-deformed CFTs in the µ < 0 case computed in [16].
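As a hedged consistency check (the displayed definitions are not reproduced above): if the C-function takes the standard Casini-Huerta form and the single-interval result is S_ℓ = (c/3) arccosh(ℓ_A/ℓ_min), as suggested by the r_+ → 0 limit of the expression (5.17) quoted later, then

$$ C(\ell_A) \equiv \ell_A\,\partial_{\ell_A} S[A] = \frac{c}{3}\,\frac{\ell_A}{\sqrt{\ell_A^2 - \ell_{\min}^2}} > 0 \quad \text{for } \ell_A > \ell_{\min}, $$

which approaches the undeformed CFT value c/3 as ℓ_A → ∞ and diverges as ℓ_A → ℓ_min from above.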
• Monotonicity. Positivity of the C-function guarantees that S[A] is monotonic as long as the length of the interval is at least ℓ_min, namely S[A_1] ≤ S[A_2] whenever ℓ_min ≤ ℓ_1 ≤ ℓ_2, where ℓ_1 and ℓ_2 are the lengths of the intervals A_1 and A_2, respectively.
• Concavity. It is not difficult to check that S[A] is also concave in the range ℓ_A > ℓ_min, namely ∂²_{ℓ_A} S[A] ≤ 0 (4.40).
• Subadditivity. Consider two adjacent intervals A_1 and A_2 of lengths ℓ_1 and ℓ_2. In general, subadditivity is violated when the function I_2 ≡ S[A_1] + S[A_2] − S[A_1 ∪ A_2] is negative. When either ℓ_1 ≤ ℓ_min or ℓ_2 ≤ ℓ_min, then I_2 < 0 and subadditivity is violated; this follows from the monotonicity of S[A] and the fact that either S[A_1] or S[A_2] vanishes. On the other hand, when ℓ_1, ℓ_2 > ℓ_min, I_2 can be written in terms of the function S_{ℓ_i} defined in (4.32). We note that, for given ℓ_2, the function I_2 is monotonic in ℓ_1, and for ℓ_1 sufficiently close to the minimum length, I_2 is always negative. When ℓ_1 = ℓ_2, we find that the zero of I_2 is located at ℓ_1 = ℓ_2 = (1 + √3)ℓ_min/2. This leads to a sufficient condition for subadditivity to be satisfied, namely ℓ_1, ℓ_2 ≥ (1 + √3)ℓ_min/2, and similarly to a sufficient condition for subadditivity to be violated, namely ℓ_min < ℓ_1, ℓ_2 ≤ (1 + √3)ℓ_min/2. If we fix ℓ_2 > (1 + √3)ℓ_min/2, then I_2 < 0 as ℓ_1 → ℓ_min, and I_2 becomes positive once ℓ_1 exceeds some critical value smaller than (1 + √3)ℓ_min/2.
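A quick numerical check of the quoted zero of I_2, under the same assumption that the single-interval result is S_ℓ = (c/3) arccosh(ℓ/ℓ_min) (inferred from the r_+ → 0 limit of (5.17), not reproduced above), in units with c/3 = 1 and ℓ_min = 1:

```python
import numpy as np

def S(l, l_min=1.0):
    # Assumed single-interval glue-on result; vanishes below the minimum length
    return np.arccosh(l / l_min) if l > l_min else 0.0

def I2(l1, l2):
    # I2 = S[A1] + S[A2] - S[A1 u A2] for two adjacent intervals
    return S(l1) + S(l2) - S(l1 + l2)

l_star = 0.5 * (1 + np.sqrt(3.0))        # claimed zero for equal lengths
print(abs(I2(l_star, l_star)) < 1e-12)   # True: I2 vanishes at l1 = l2 = l_star
print(I2(1.01, 1.01) < 0)                # True: violated close to l_min
print(I2(2.0, 2.0) > 0)                  # True: satisfied for longer intervals
```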
We have seen that for a half interval on the sphere, S[A] reproduces the entanglement entropy of TT̄-deformed CFTs. If we extend this interpretation to the present case, the violation of subadditivity would suggest that the Hilbert space cannot be factorized into a product of local degrees of freedom, since S[A_1 ∪ A_2] > S[A_1] + S[A_2] means that the union A_1 ∪ A_2 somehow contains more entanglement with its environment than the sum of the individual subsystems A_i. Similar violations of subadditivity have been observed in other interesting examples, such as [43,44], and have been interpreted as a result of non-locality in the dual field theories.
• Strong subadditivity (SSA). Strong subadditivity is violated whenever the function I_3 ≡ S[A_1 ∪ A_2] + S[A_2 ∪ A_3] − S[A_2] − S[A_1 ∪ A_2 ∪ A_3] is negative. Let us first consider three adjacent intervals A_1, A_2, and A_3 with unconstrained lengths ℓ_1, ℓ_2, and ℓ_3. It is not difficult to find special cases where I_3 < 0; this occurs, for instance, when ℓ_1 + ℓ_2 < ℓ_min and ℓ_1 + ℓ_2 + ℓ_3 > ℓ_min. On the other hand, when the lengths of the intervals are larger than ℓ_min, we can use the single-interval expression S_{ℓ_i} for each of the four terms in (4.45). It is then straightforward to verify that all the partial derivatives ∂_{ℓ_i} I_3 are positive for i = 1, 2, 3. We can also find a critical value l of order ℓ_min such that I_3 = 0 when the lengths of the intervals are all equal to l. Strong subadditivity is then satisfied as long as ℓ_i > l for all i, and it is violated whenever ℓ_i < l for all i.
In our discussion, the violation of strong subadditivity for µ > 0 is observed for intervals lying on a constant-time slice. For µ < 0, similar violations were observed in [16], but only when the intervals are boosted so that they do not lie on the same time slice.
• Infinitesimal version of SSA. Consider now three adjacent intervals A_1, A_2, and A_3, one of which is taken to be infinitesimally small, with length ϵ. Unlike the generic version of strong subadditivity, the infinitesimal one is guaranteed by concavity (4.40) as ϵ → 0.
These properties of the glue-on HRT formula are reminiscent of the entanglement entropy of an interval A in a quantum field theory. This is not surprising, given that in the limit √µ ∝ ϵ → 0, (4.30) reduces to the standard HRT formula of the AdS3/CFT2 correspondence. For finite values of µ, we have seen that the glue-on HRT surface for a half interval on the sphere reproduces the entanglement entropy of TT̄-deformed CFTs with a finite UV cutoff determined by µ. All of these results suggest a strong connection between the glue-on HRT formula and the entanglement entropy of TT̄-deformed CFTs.
HRT surfaces in glue-on AdS3 on the cylinder
In this section we construct glue-on HRT surfaces for spacelike intervals on the cylinder, following the general prescription proposed in the previous section. In particular, we study HRT surfaces on both the vacuum and the nonrotating BTZ black hole. The fact that the spatial circle is compact in these cases leads to a novel interplay between the HRT surfaces and the minimum length of the interval. In particular, we will show that the topology of the glue-on BTZ background leads to a novel phase diagram for S[A] as the size of the interval is changed. A general formula that is valid for small intervals on arbitrary stationary solutions of Einstein gravity is given in appendix B.
Global AdS3
In this section we construct the HRT surface associated with an interval at a cutoff surface on a fixed-time slice of the glue-on version of global AdS3. The signed area of this surface is expected to be related to the entanglement entropy of TT̄-deformed CFTs with µ > 0 on the vacuum. In particular, we will see that there is a minimum length of the interval below which the glue-on HRT surface ceases to exist, in agreement with the results of previous sections.
We begin by describing the glue-on version of global AdS3. The metric can be written in the gauge introduced in section 2.2, as in (5.1), with ζ_c < 0 such that the cutoff surface is located in the AdS3* region of the spacetime. Note that the signature of the spacetime changes at ζ = −1, where the spacelike and timelike natures of the ζ and t coordinates are exchanged. In analogy with the sphere foliation of AdS3 considered in section 3.2, we restrict the range of the radial coordinate in (5.1) to ζ ≥ −1. Using the holographic dictionary (2.10), we see that this range of ζ reproduces the bound on the deformation parameter of TT̄-deformed CFTs on a cylinder of radius R = ℓ (2.6).
Let us now consider an interval A on the cutoff surface ζ = ζ_c. For convenience, we assume that the interval lies on a fixed-time slice, so that it can be parametrized by its angular coordinate as in (5.2); the total length of the interval is then ℓ_A = 2ℓφ_A. We are interested in finding the extremal spacelike surface γ_A with the smallest signed area that is anchored to the interval (5.2) at the cutoff surface. Following the discussion in the previous section, we construct the glue-on HRT surface as follows. We first consider a standard HRT surface in the AdS3 region of the glue-on AdS3 spacetime. In the present case, it is more convenient to parametrize the HRT surface in terms of the angular, instead of the radial, coordinate; the function describing the HRT surface is then single valued. To obtain the glue-on HRT surface, we continue the value of the angular coordinate such that the HRT surface crosses into the AdS3* region of the spacetime. The resulting glue-on HRT surface is given by (5.3) (see fig. 7a for an illustration). It consists of two segments in AdS3* and an HRT surface in AdS3; the latter is anchored to an auxiliary interval Â at the asymptotic boundary, given in (5.4). Decreasing the size of the interval A at the cutoff surface decreases the size of the interval Â at the asymptotic boundary. In particular, the HRT surface (5.3) does not exist when the size of the interval Â shrinks to zero. At this point, the AdS3 part of γ_A disappears, and the two segments in AdS3* become lightlike. As a result, we find that in this case there is also a minimum length of the interval for which the glue-on HRT surface is guaranteed to exist, given in (5.5). In particular, note that for small values of µ we have ℓ_min² = 4cµ/3 + O(µ²), which reduces to the result obtained for Poincaré AdS3 in section 4.2. On the other hand, when µ < 0, the spacelike geodesic γ_A lies in the AdS3 part of the spacetime and is described by (5.3) with φ < φ_Â. In this case, γ_A corresponds to an HRT surface attached to the cutoff surface ζ = ζ_c > 0, and there is no minimum length of the interval.
The signed area of the glue-on HRT surface (5.3) can be written as the integral (5.6), where ϵ → 0⁺ regulates the divergence of the integral near the asymptotic boundary at φ = φ_Â. The first term in (5.6) is the area of the HRT surface attached to the asymptotic boundary of AdS3, while the second term corresponds to the area of the AdS3* part of (5.3). Evaluating the integral, we find that the area of γ_A is finite and independent of the cutoff ϵ. Using the holographic dictionary (2.10), together with the relationship between φ_A and φ_Â in (5.4), the glue-on HRT formula yields (5.9), where we have also included the result for µ < 0 for completeness.
Let us now comment on a few features of (5.9). First, when µ > 0, the right-hand side of (5.9) is valid only for ℓ_A > ℓ_min. As discussed earlier, the glue-on HRT surface does not exist for ℓ_A ≤ ℓ_min, in which case we define S[A] = 0. Altogether, the single-interval result is still non-negative, which implies that S is non-negative for all intervals in global AdS3; this is the same behavior observed for Poincaré AdS3 in section 4.2. Since the spatial circle is contractible in the AdS3 region of the spacetime, an interval of size ℓ_A > ℓ_min shares the same HRT surface as its complement of size ℓ_{A^c} = 2πℓ − ℓ_A. As a result, we have S[A] = S[A^c], which is analogous to the purity condition described in section 4.4. The existence of a minimum length then implies the existence of a maximal length ℓ_max = 2πℓ − ℓ_min beyond which the glue-on HRT surface ceases to exist. Finally, note that when the deformation parameter saturates the bound (2.6), i.e. when it takes the critical value µ_c = 3ℓ²/c, the minimum length of the interval (5.5) becomes half the size of the system, namely ℓ_min|_{µ=µ_c} = πℓ = ℓ_max (5.10). Consequently, there are no glue-on HRT surfaces for any interval when µ reaches its critical value. These features of the glue-on HRT formula on global AdS3 are reminiscent of the behavior of the entanglement entropy of a pure state, and it would be interesting to explore further the relationship between (5.9) and the vacuum entanglement entropy of TT̄-deformed CFTs.
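Since (5.5) is not reproduced above, the following closed form is an inference, but it is consistent with both limits quoted in the text:

$$ \ell_{\min} = 2\ell \arcsin\!\left(\frac{1}{\ell}\sqrt{\frac{c\mu}{3}}\right), $$

which gives ℓ_min² = 4cµ/3 + O(µ²) at small µ, and ℓ_min = 2ℓ arcsin(1) = πℓ at the critical value µ_c = 3ℓ²/c, in agreement with (5.10).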
BTZ black holes
In this section we study the consequences of adding temperature to the glue-on HRT formula. In this case, the formula is given by the signed area of an extremal surface attached to the endpoints of an interval on a cutoff surface in the glue-on version of the BTZ black hole. We will show that, in analogy with the holographic entanglement entropy of two-dimensional CFTs, S[A] of an interval A differs from that of its complement A^c, a result related to the fact that BTZ is not a pure state. In particular, we will show that the signed area can lead to situations where the extremal surface homologous to A always dominates as we increase the size of the interval. Let us consider the glue-on version of the nonrotating BTZ black hole. The metric can be written in the coordinates used in (2.9) as in (5.11), where ζ = ζ_c ≥ −1 is the location of the cutoff surface, the horizon is located at ζ^{-1} = r_+², and r_+ = 2πℓ/β_t, with β_t the inverse temperature of the black hole. Due to the (1 − r_+²ζ) factor in the g_tt component of the metric, the inverse temperature β of the TT̄-deformed CFT living at the cutoff surface is related to β_t via (5.12) [1]. The location of the cutoff surface ζ_c is related to the TT̄ deformation parameter via the holographic dictionary (2.10).
We now consider an interval A on the cutoff surface ζ = ζ_c, in the BTZ* (ζ < 0) region of the spacetime, at a fixed-time slice. The interval is parametrized by the angular coordinate as in (5.2), and the extremal surface X_A homologous to A is given by (5.13). The extremal surface X_A consists of two parts, lying in the BTZ and BTZ* regions of the glue-on spacetime. As illustrated in fig. 8a, the BTZ part of X_A consists of a standard HRT surface that lies outside the horizon of the BTZ black hole and attaches to an auxiliary interval Â at the asymptotic boundary, parametrized as in (5.14). As a result, there is a minimum length of the interval A necessary for the existence of the glue-on HRT surface, given by (5.15), where the last inequality follows from the bound on the deformation parameter (2.6). Notably, in this case the minimum length depends on the size of the black hole, decreasing as r_+ is increased. In particular, from (5.15) we learn that ℓ_min < 2πℓ − ℓ_min. It is not difficult to verify that the extremal surface (5.13), the interval (5.14), and the minimum length (5.15) reduce to the corresponding quantities in global AdS3 after the analytic continuation r_+ = i, which turns the nonrotating BTZ black hole into the global AdS3 spacetime.
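As a sketch of where (5.15) presumably comes from (the displayed equation is not reproduced above): the expression for S_{ℓ_A} quoted in the caption of fig. 8 vanishes when the argument of the arccosh equals one, i.e.

$$ \sinh\!\left(\frac{r_+\ell_{\min}}{2\ell}\right) = \frac{r_+}{\ell}\sqrt{\frac{c\mu}{3}} \quad\Longrightarrow\quad \ell_{\min} = \frac{2\ell}{r_+}\,\operatorname{arcsinh}\!\left(\frac{r_+}{\ell}\sqrt{\frac{c\mu}{3}}\right), $$

which indeed decreases monotonically as r_+ grows and reduces to the global AdS3 expression under the continuation r_+ = i, since arcsinh(ix)/i = arcsin(x).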
The signed area of the extremal surface (5.13) can be written as the integral (5.16), where ϵ → 0⁺ is the UV cutoff, and the absolute value in the last line guarantees that the expression is valid for cutoff surfaces lying in either the BTZ or the BTZ* region of the spacetime.
Due to the existence of a black hole horizon, the homology condition for an interval A differs from that of its complement A^c, and we expect γ_A to differ from γ_{A^c}. This is related to the thermal nature of the dual state on the field theory side, and it can be seen directly from (5.16), as the expression is not invariant when φ_A is exchanged with its complement π − φ_A. If the interval is small enough that ℓ_A < ℓ_min, then there is no extremal surface homologous to A, and hence S[A] = 0. On the other hand, when ℓ_A > ℓ_min, there are two extremal surfaces homologous to A, as illustrated in fig. 8. One surface, X_A, is connected and homotopic to A; the other is disconnected and consists of the union of the BTZ horizon and X_{A^c}, the latter of which is homotopic to the complement A^c. As ℓ_A approaches 2πℓ − ℓ_min from below, the extremal surface X_{A^c} shrinks towards the BTZ* region of the spacetime and ceases to exist once this value is reached. As a result, for ℓ_A ≥ 2πℓ − ℓ_min the disconnected surface consists only of the BTZ horizon. The transition length l between the competing phases is determined by equating the connected and disconnected contributions.
• When r_+ < r_2, a phase transition occurs at a critical length ℓ_min < l < 2πℓ − ℓ_min, with S[A] following the connected contribution below l and the disconnected contribution S̄_dis, given in (5.19), above it; the latter is greater than the black hole entropy for ℓ_A < 2πℓ − ℓ_min, as shown in fig. 9c.
Finally, we note that when µ < 0, the glue-on HRT surface reduces to a standard HRT surface with a finite cutoff. In this case there is no minimum length of the interval, and the phase diagram is similar to that of a CFT2 in a thermal state.
A Renormalized entropy from endpoints
In this appendix we revisit the derivation of the holographic entanglement entropy of a half interval in TT̄-deformed CFTs on the sphere. As described in section 3.1, the sphere partition function of TT̄-deformed CFTs, Z_µ(a), depends on an integration constant a with the dimension of length, which is related to the renormalization scale of the theory. The choice a = √(c|µ|/3) leads to an exact match between the field theory calculation (3.13) and the holographic result (3.21), and furthermore reproduces the result of [15] when µ < 0. The relationship between the length scale a and the deformation parameter µ is natural from the point of view of cutoff/glue-on AdS holography and the UV/IR relation of the AdS/CFT correspondence. This follows from the fact that changing the location of the asymptotic boundary of the bulk spacetime, which is determined by µ, is interpreted as changing the UV cutoff a of the dual field theory, so that there is a single scale specified by µ.
On the other hand, it would also be interesting to study the TT̄ deformation with an independent UV length scale, which is natural from the field theory perspective. In this case the integration constant a is decoupled from the deformation parameter µ. Using the general formula (3.6) for the partition function Z_µ(a), we find that for generic a the entanglement entropy is given by (A.1) [37]. We see that the integration constant a now enters the entanglement entropy. This can be understood as a renormalized quantity, where the UV cutoff is always tuned to the length scale a. Unlike the previous result (3.13), the entropy S_a[A] is analytic in µ, and it admits a direct µ → 0 limit, where S_a[A] → (c/3) log(2L/a) and a is simply identified with the UV cutoff of the original CFT. It would then be interesting to identify the holographic prescription for the renormalized entropy (A.1). In order to incorporate an independent boundary radius L, we replace ζ → (ℓ²/L²)ζ in the background (3.15); under this rescaling, the two scales L and ℓ are decoupled, unlike in previous sections. Consider the partition function Z^(n) of the n-cover of the sphere, which is smooth in the bulk but has conical singularities with angle 2πn at the endpoints. The holographic entanglement entropy is given by the bulk extension of the replica trick (3.10), following [45]. In summary, we have shown that the renormalized entropy on the sphere (A.1) can be obtained from a generalized HRT prescription, where additional endpoint contributions (A.7) are included besides the area terms. This is the entanglement entropy compatible with the partition function (3.6) that satisfies the flow equation (3.4). Our prescription is limited to the TT̄ deformation on the sphere, where we have a concrete result for the replica partition functions of the bulk and the boundary. It would be interesting to consider the endpoint contributions for general backgrounds, which we leave for future study.
B General formula for a small interval on the cylinder
In section 5 we considered the glue-on HRT proposal for spacetimes with a compact spatial coordinate but zero angular momentum. The methods developed in the main text can be readily applied to more general spacetimes with angular momentum, and we summarize the main results in this appendix.
According to the glue-on AdS3 proposal described in section 2, a TT̄-deformed CFT with left- and right-moving temperatures T_{L,R} can be thought of as living on a cutoff surface in a glue-on version of the rotating BTZ black hole, whose metric is given in (B.1) below. The cutoff surface ρ = ρ_c in these coordinates is related to the deformation parameter via the holographic dictionary.
Figure 1: A glue-on AdS3 spacetime consists of two locally AdS3 spacetimes, denoted AdS3 and AdS3*, glued along their asymptotic boundaries at ζ = 0. The glue-on HRT surface γ_A (blue) associated with the interval A on the cutoff surface (red) at ζ = ζ_c consists of a standard HRT surface in AdS3 and two hyperbolic segments in AdS3*.
Figure 2: A cross-section of the glue-on AdS3 space showing the extended HRT surface (blue) connecting the north and south poles of the sphere at a cutoff surface (red) in AdS3*. The dashed line denotes the asymptotic boundary of the AdS3 and AdS3* regions.
Figure 3: Fixed-time slices of the glue-on Poincaré spacetime showing two HRT surfaces (blue) associated with an interval A at a finite cutoff (red) in AdS3*. When the length of the interval ℓ_A equals ℓ_min, the AdS3 part of the HRT surface shrinks to a point and the AdS3* part becomes lightlike.
Figure 4: A non-extremal surface X_A (orange) and the extremal surface γ_A (blue). Both surfaces are piecewise geodesic and anchored at the endpoints of the interval A on the cutoff surface (red) at ζ = ζ_c.
Figure 5: The glue-on HRT surfaces (blue) associated with two intervals A_1 and A_2 on a fixed-time slice at a cutoff (red) in the glue-on version of Poincaré AdS3. Cases (a) and (b) show the two competing surfaces that are possible when the separation ℓ_x between the intervals is greater than ℓ_min. When ℓ_x < ℓ_min, there is no glue-on HRT surface associated with ℓ_x and the disjoint intervals A_1 and A_2 are treated as one, as illustrated in (c).
Figure 6: S[A_1 ∪ A_2] for two intervals A_1 and A_2 of sizes ℓ_1 and ℓ_2, as a function of the distance ℓ_x between them. When ℓ_x ≲ ℓ_min, the dominant phase is always the single-interval configuration, as shown in fig. 5c. As shown in (a), for sufficiently small ℓ_1 and ℓ_2, the "bridge" HRT surface (fig. 5b), whose entropy is given by the blue curves, never dominates.
Figure 7: Fixed-time slices of the glue-on version of global AdS3 and the HRT surfaces γ_A (blue) associated with intervals A at the cutoff surface (red) in the AdS3* region. When the length of the interval ℓ_A equals ℓ_min, the AdS3 part of the candidate HRT surface shrinks to a point and the AdS3* part becomes lightlike.
Figure 8: Extremal surfaces associated with an interval A at the cutoff surface (red) on a fixed-time slice of a glue-on BTZ spacetime. The connected surface (a) is homologous to the interval A, while the disconnected one (b) consists of the union of the circle around the black hole horizon (black disk) and the surface homologous to the complement A^c. Using (5.16) and (5.14), the contribution of (5.13) to the glue-on HRT formula for a generic value of ℓ_A > ℓ_min is given by
S_{ℓ_A} ≡ (c/3) arccosh[ (1/r_+) √(3ℓ²/(cµ)) sinh(r_+ ℓ_A/(2ℓ)) ] = (c/3) arccosh[ sinh(r_+ ℓ_A/(2ℓ)) / sinh(r_+ ℓ_min/(2ℓ)) ]. (5.17)
In terms of this quantity, the contributions of the connected and disconnected surfaces to the glue-on HRT formula read: connected, S̄_con = 0 for ℓ_A < ℓ_min and S̄_con = S_{ℓ_A} for ℓ_A ≥ ℓ_min (5.18); disconnected, S̄_dis as given in (5.19).
The counterterm involved is proportional to ∫ d²x √γ R[γ], with a coefficient involving (ℓ/32πG) log(ℓ²/(a²ζ_c)) (A.9). This leads to a correction S_W to the entanglement entropy, obtained as a replica derivative ∂_n, such that the resulting holographic entanglement entropy reproduces the field theory result with an arbitrary length scale a: S[A] + S_W = S_a[A]. (A.11)
ds² = ℓ² [ dρ²/(4ρ²) + (du + ρ T_v² dv)(dv + ρ T_u² du)/ρ ], ρ ≥ ρ_c, (B.1)
where ρ takes negative values in the BTZ* region. The parameters T_{u,v} are given in terms of the inverse temperatures β_{u,v} of the BTZ black hole by T_{u,v} = π/β_{u,v}. In analogy with the nonrotating case, they are related to the temperatures of the TT̄-deformed CFT as in [1].
Acknowledgments
Part of this work was done at the "Joint Workshop on Fields and Strings 2022". WXL and WS thank the Yukawa Institute for Theoretical Physics (YITP) for hospitality during the "YIPQS long-term workshop on Quantum Information, Quantum Matter and Quantum Gravity" (YITP-T-23-01), where part of this work was completed.
"Physics"
] |
The Implementation of a Deep Neural Network (DNN) Approach in a Case Study Predicting the Distribution of Carbon Dioxide (CO2) Gas Saturation
Predicting the distribution of CO2 gas saturation is one example of how multiphase flow might be evaluated in Carbon Capture and Storage (CCS). The TOUGH2 simulator is one of the numerical simulators commonly used for multiphase flow. Ordinary numerical simulations have several issues, including the need for high grid spatial resolution and high processing costs. One of the most effective deep learning approaches to predicting the distribution of CO2 gas saturation is the deep neural network (DNN), a network with three interconnected kinds of layers: input, hidden, and output layers. A DNN learns from the input data using a previously constructed architecture, and it requires a large quantity of data as input; in this study, we use 700 data points for each of the train_a and train_b variables. The trained DNN model predicts the distribution of CO2 gas saturation automatically. This technique can handle complex data patterns, such as gas saturation in multiphase flow problems. The reconstruction loss findings show that the loss value decreases as the number of epochs increases. Furthermore, we trained with 3 and 4 epochs to determine the difference in results between the two. The model with 4 epochs and a regularization weight of 10^-3 obtained the lowest error value of 0.4305. In summary, this model is capable of predicting the CO2 gas saturation distribution, but more research is needed to produce more optimal results. This research aims to help monitor multiphase flow in CCS systems by forecasting the distribution of CO2 gas saturation.
Introduction
Carbon Capture and Storage (CCS) is a technology that could potentially reduce carbon emissions by up to 85% by 2050 [1]. In a CCS system, once injection is finished, the CO2 in the permeable storage reservoir is controlled by several conditions (fluid pressure, temperature, composition, and stress field) as well as rock properties (porosity, permeability, density) [2]. Above the permeable layer there is a seal (caprock), so the CO2 cannot move directly upwards. Instead, CO2 spreads and steadily migrates upslope. Migration proceeds until the CO2 reaches a trap in the outermost layer, where it collects. Therefore, multiphase flow is one of the things that needs to be analyzed in this system [3], because multiphase flow can be used to tackle subsurface flow issues [4]. Subsurface geological heterogeneity can produce variations in permeability and capillary pressure [5]. By describing how well a fluid is able to move through the system, effective permeability is critical in multiphase systems for precise simulation and forecasting of fluid flow [6]. A low permeability value is caused by the existence of a low gas saturation level [7].
Multiphase flows are often simulated using numerical simulations [8]; the most frequently used is the TOUGH2 simulator. Conventional computational simulations have a number of restrictions, including the need for high grid spatial resolution [9], [7] along with expensive processing costs [10]. One of the algorithms used to deal with the inadequacies of conventional computational simulation is the Deep Neural Network (DNN), a three-layer artificial neural network consisting of an input layer, a hidden layer, and an output layer [11]. Each connection between neurons has a weight, initialized as a random number, which is changed for each connection in each iteration. The weights are adjusted by providing feedback to the input node. There is also a bias at each layer to help the machine generalize the learning process [12]. In Figure 1, there is an activation function in a neural network. The activation function sums all input signals and determines whether the sum has reached the threshold or not, thereby triggering the appearance of an output signal [13]. An advantage of the DNN technique, according to previous research, is that DNN results for image and voice recognition can be quite accurate [14], [15]. Moreover, the DNN technique can produce CO2 migration projections with precision comparable to conventional numerical models [16].
Figure 1. Perceptron concept
The reconstruction loss function must be used when developing the DNN model to demonstrate how well the network can reconstruct data. The gap between the predicted and true values is calculated using a common loss function, the Mean Square Error (MSE). The lower the value, the more capable the model is of reconstructing the input data, and the procedure is equivalent to using the objective function of the inverse problem. This loss has been chosen because greater carbon dioxide gas saturation in multiphase flows is usually linked with increased movement, necessitating exceptional accuracy in plume prediction [17].
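A minimal sketch of this MSE reconstruction loss is shown below; the grid shape (96, 200, 24) follows the model summary in Figure 6, and the arrays are random placeholders rather than the study's data:

```python
import numpy as np

def reconstruction_loss(y_true, y_pred):
    # Mean Square Error between the true and reconstructed saturation grids
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

rng = np.random.default_rng(0)
truth = rng.random((96, 200, 24))                        # saturation values in [0, 1]
noisy = truth + 0.05 * rng.standard_normal(truth.shape)  # imperfect reconstruction
print(reconstruction_loss(truth, noisy))  # ~0.0025: lower means better reconstruction
```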
The 3D temporal model is meant to extract temporal information from the model to be predicted. It includes a temporal layer capable of adjusting the depth of the temporal convolutional kernel, making it suitable for capturing temporal behavior in the immediate, medium, and distant future. The design is made up of three major components: an encoder, a processor, and a decoder [18]. The goal of this study is therefore to accurately predict how CO2 gas saturation distributes using the Deep Neural Network (DNN) approach. We expect that our methods will help address the increasing need to evaluate the storage of carbon dioxide.
Methodology
Anaconda (with a Python 3.6 environment), Jupyter Notebook, and Microsoft Office were all used in this study. The study was conducted on computer hardware meeting the following requirements:
DNN architecture development
The first step is to import the libraries required for the script to execute. Then, the regularization weight value to be used is determined. Developing the 3D temporal architecture is the next step. The 3D temporal architecture consists of an encoder, processor, and decoder, as shown in the overall workflow in Figure 2. Following that, architecture development proceeds by compiling the specified layers to produce a Variational Autoencoder model [19].
The final stage in constructing this architecture is to create the output model.
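A minimal sketch of such an encoder-processor-decoder network follows. TensorFlow/Keras is assumed (the framework is not named above); the layer counts are reduced relative to the 6-8-6 structure described later (Figure 5), the custom reflection padding used in the original code is replaced by ordinary 'same' padding for brevity, and the sigmoid output is an illustrative choice for saturation values in [0, 1]:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, stride):
    # Conv3D -> BatchNorm -> ReLU, as described for the encoder layers
    x = layers.Conv3D(filters, (3, 3, 3), strides=stride, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

def residual_block(x, filters):
    # Residual connection implemented with an Add (shortcut) layer
    shortcut = x
    y = conv_block(x, filters, 1)
    y = layers.Conv3D(filters, (3, 3, 3), padding="same")(y)
    y = layers.BatchNormalization()(y)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

inputs = layers.Input(shape=(96, 200, 24, 1))

# Encoder: stride-2 convolutions halve each spatial dimension
x = conv_block(inputs, 32, 2)   # -> (48, 100, 12, 32)
x = conv_block(x, 64, 1)
x = conv_block(x, 64, 2)        # -> (24, 50, 6, 64)

# Processor: residual convolution layers (8 in the full model)
for _ in range(4):
    x = residual_block(x, 64)

# Decoder: upsampling followed by convolution ("deconvolution")
x = layers.UpSampling3D(size=(2, 2, 2))(x)
x = conv_block(x, 32, 1)
x = layers.UpSampling3D(size=(2, 2, 2))(x)
outputs = layers.Conv3D(1, (3, 3, 3), padding="same", activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.summary()  # compare with the layer table in Figure 6
```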
Data training process
The data training procedure starts with importing the necessary libraries and loading the shapes based on the previously generated architecture model. Then the data from both the test and train sets are loaded and shuffled. After loading the data sets, the loss function is defined and the training specifications are determined. Several sub-steps must be completed when determining the training standards: defining the training specifications (epochs, batch size, learning rate), compiling the ADAM optimizer and loss function, calculating the total loss, updating the optimizer parameters, iterating the training and evaluating the model, and determining the directory for model output. The training step iterates over every epoch and batch, with the model resulting from the training data being saved at the end [19].
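A minimal sketch of this training loop, assuming TensorFlow/Keras and the `model` from the previous sketch; the arrays, the learning rate, and the output directory name are placeholders rather than the study's actual values:

```python
import numpy as np
import tensorflow as tf

# Placeholders standing in for the real train_a (inputs) and train_b
# (simulator-generated CO2 gas saturation targets); shapes follow Figure 6.
train_a = np.random.rand(8, 96, 200, 24, 1).astype("float32")
train_b = np.random.rand(8, 96, 200, 24, 1).astype("float32")

EPOCHS, BATCH_SIZE, LEARNING_RATE, REG_WEIGHT = 4, 2, 1e-4, 1e-3

optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
mse = tf.keras.losses.MeanSquaredError()
dataset = (tf.data.Dataset.from_tensor_slices((train_a, train_b))
           .shuffle(buffer_size=8).batch(BATCH_SIZE))

for epoch in range(EPOCHS):
    for xb, yb in dataset:
        with tf.GradientTape() as tape:
            pred = model(xb, training=True)
            loss = mse(yb, pred)          # reconstruction term
            if model.losses:              # weighted regularization, if registered
                loss += REG_WEIGHT * tf.add_n(model.losses)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print(f"epoch {epoch + 1}: loss = {float(loss):.4f}")

model.save("trained_dnn_model")  # directory chosen for the model output
```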
Data prediction and visualization process
Importing the necessary libraries is the first step in the data prediction and visualization process. Test data are then loaded so that the model can generate predictions for them. Next, the trained model produced by the preceding training procedure is loaded. The prediction procedure uses a combination of the test data and the trained model. Afterwards, a plot of the predicted results is generated to visualize the model output [19].
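A minimal sketch of this prediction-and-plotting step; the model directory, slice index, and random test input are placeholders:

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

model = tf.keras.models.load_model("trained_dnn_model")       # trained model
test_a = np.random.rand(1, 96, 200, 24, 1).astype("float32")  # placeholder test input

predicted = model.predict(test_a)

# One depth slice: X axis = distance, Y axis = formation thickness,
# colour scale = gas saturation (0 = dark blue, 1 = yellow)
plt.imshow(predicted[0, :, :, 12, 0], cmap="viridis", vmin=0.0, vmax=1.0)
plt.colorbar(label="CO2 gas saturation")
plt.xlabel("distance")
plt.ylabel("formation thickness")
plt.show()
```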
Input Data Quality Analysis
A training dataset is required for an algorithm to carry out the learning process: it is the data used to build a model during the "training" stage. After training on the train data, a model must be evaluated with new data to determine its performance; this new data is generally referred to as test data. The test and training datasets in this study contain A and B data. The train_a and test_a data sets are made up of reservoir conditions (initial pressure, temperature, and formation thickness), the geological model (permeability), and the injection design (injection rate, injection time, and perforation thickness). Meanwhile, the train_b and test_b data are separate processed data from the Eclipse (e300) program in the form of CO2 gas saturation data with values from 0 to 1.
DNN model analysis
The produced DNN model script can be assessed based on the layers used to construct the Deep Neural Network (DNN) architecture. The first is the 3D convolutional layer, which executes convolution operations on three-dimensional input data. The following layer is reflection padding, which is applied to the input volume; this layer preserves the spatial dimensions of the input when the convolution is performed. Then there is a batch normalization layer, which helps to speed up the training process by normalizing the input of each mini-batch. An activation layer is used next to activate the output from the prior layer; this DNN model was created with ReLU (Rectified Linear Unit) as its activation function. Following that is an add layer, which implements the shortcut connections of the residual connection layers. Finally, there is an upsampling layer, which performs upsampling operations on the input data.
Furthermore, the hyperparameter values that can be examined in this DNN model include the amount of train and test data, the number of epochs, and the batch size. The number of training data points used in this study is 700 for each of the train_a and train_b variables, while the test data comprise 300 data points for each of test_a and test_b. Three factors influenced the amount of data loaded in the training process. First, the train and test data are massive, preventing the computer from loading all of the data. Second, based on the availability and complexity of the data, the composition ratio of the data used is 70:30. Third, the computer's limited capacity for the training process caused technical challenges when the procedure was loaded with greater quantities of data. Ideally, the more data used, the better the results; however, the amount must be tailored to the availability and complexity of the data, computer performance, and research duration.
The number of epochs chosen can also affect the training process and results. The greater the number of epochs, the more the model can learn the features of the dataset through repeated passes. Therefore, 3 and 4 epochs were used in this investigation.
Regarding Figure 5, it can be seen that 6 convolution layers are used in the encoder, 8 residual convolution layers in the processor, and 6 deconvolution layers in the decoder section. Convolution layer 1 uses a filter of size (3,3,3) with 32 filters. This layer uses stride 2, which means the kernel shifts by 2 across the input matrix. Then, convolution layer 2 uses a filter of size (3,3,3) with a total of 64 filters. This layer uses stride 1, meaning the kernel shifts by 1 across the input matrix. The same pattern continues through to the deconvolution layers.
Figure 6. Summary of DNN model architecture
According to Figure 6, four columns summarize the composition of the DNN model. The first column lists the layers used; the second gives the size of the output produced; the third the number of parameters; and the fourth the information about which layers are connected. For example, the input image defined previously has shape (96, 200, 24, 1): 96 pixels in the height dimension, 200 pixels in the width dimension, 24 in the depth dimension, and 1 colour channel (a single channel usually indicates a grayscale image). In convolution layer 1, the output shape becomes (None, 48, 100, 12, 32). This size is reduced from the input size because the layer uses 32 filters of size (3,3,3) with stride (2,2,2). Convolution layer 1 is connected to the preceding input layer. The number of parameters can be calculated with the following formula: Param = (kernel size × kernel size × kernel size × input channels + 1) × number of filters. Thus, the number of parameters in convolution layer 1 is Param = (3 × 3 × 3 × 1 + 1) × 32 = 896.
This calculation also applies to convolution layer 2 and beyond.
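The same formula, written as a small helper function for checking the parameter counts reported in Figure 6:

```python
def conv3d_params(kernel, in_channels, n_filters):
    # (kernel volume x input channels + 1 bias) x number of filters
    k1, k2, k3 = kernel
    return (k1 * k2 * k3 * in_channels + 1) * n_filters

print(conv3d_params((3, 3, 3), 1, 32))   # 896: convolution layer 1
print(conv3d_params((3, 3, 3), 32, 64))  # 55360: convolution layer 2
```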
Qualitative analysis of the predicted CO2 gas saturation distribution
The code used in this study to predict the distribution of CO2 gas saturation is a modified version of the one produced by Wen et al. (2021), available at https://github.com/gegewen/ccsnet_v1.0. Figures 7 to 10 present the results of this visualization, which differ significantly from the findings presented in the paper by Wen et al. [17]. The visual representation of the distribution prediction results in that publication appears nearly identical to the numerical modelling results, demonstrating that the error is insignificant. In our case, however, the prediction results differed significantly from the original input. The differences in findings are attributable to the different parameter values employed, including the number of epochs, regularization weight values, batch sizes, and the number of test samples [19]. This indicates that changing the hyperparameters has an effect on the prediction results.
Quantitative analysis of the predicted CO2 gas saturation distribution
Quantitative analysis requires metrics commonly used in deep learning. The variable used in this investigation is the reconstruction loss, determined with the Mean Square Error (MSE) approach to quantify the train and eval reconstruction loss values [19]. We only employed regularization weights of 10^-3 and 10^-5 in this study because the results revealed substantial differences between the two. Figures 12 and 13 show the calculated results with a regularization weight of 10^-5. As the epoch count increases, the reconstruction loss value is expected to get lower; an ideal reconstruction loss value is very close to zero. Generally, the model with three epochs has train and eval loss values that decrease as the number of epochs increases: the train reconstruction loss value decreases steadily, and the eval reconstruction loss shows a similar trend, with the loss value getting smaller. Figure 12 further demonstrates that the eval reconstruction loss values are lower than the train reconstruction loss values, signifying that the model performed well on the testing data [19]. The model with four epochs has much lower train and eval loss values as the number of epochs increases; both the train and eval reconstruction losses follow the same decreasing trend. A further metric that can be used to assess the reliability of a model is the error value. The Root Mean Square Error (RMSE) is a widely applied error measure for deep learning models. Simply put, the RMSE value is derived by taking the square root of the mean squared difference between the predicted and actual test data values [19]. These formulas yield the RMSE values shown in Table 1 below. Referring to the RMSE evaluation findings in Table 1, the model with a regularization weight of 10^-5 and 4 epochs has a smaller error value than the model with the same regularization weight and 3 epochs. Furthermore, the model with a 10^-3 regularization weight and 4 epochs has a lower error value than the model with the same regularization weight and 3 epochs. When all models are compared, the model with a regularization weight of 10^-3 and 4 epochs has the lowest error. In summary, compared with the other three models, this last model is the most comparable to the numerical models.
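For reference, a minimal RMSE sketch matching this definition (the arrays are placeholders):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Square root of the mean squared difference between predicted and actual values
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

print(rmse([0.1, 0.5, 0.9], [0.2, 0.4, 0.8]))  # 0.1
```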
Conclusion
From this study, we can conclude that the model is suitable for estimating the CO2 gas saturation distribution. This is demonstrated by the reconstruction loss value, which decreases as the number of epochs increases. In addition, the measured RMSE values ranged from 0.4305 to 0.4530. To attain more accurate results, this model will need to be refined further.
Recommendations
Based on the data analysis findings, the authors recommend that future research use actual field data as input in order to better represent actual conditions. The authors suggest using more than 700 training data points, 300 test data points, and 4 epochs to help the model better grasp the data. If the regularization weight is set too low, the autoencoder may overfit the data and perform poorly on new data; if it is set too high, the autoencoder may underfit the data and miss critical features. Therefore, based on this research, the authors suggest a regularization weight of 10^-3.
Figure 4. Workflow of data prediction and visualization.
In addition to the amount of data and the number of epochs, the batch size during the training process can affect the training time of a DNN process. By processing multiple data points at once, batching saves training time. The training process can, however, be slower when the batch size is enormous.
Figure 10. The comparison of numerical simulation results, DNN prediction results, and error using epoch 4 and regularization weight 10^-3.
In Figure 7, Figure 8, Figure 9, and Figure 10, three images are produced for each figure: numerical simulation results, DNN prediction outputs, and errors. In each image, the X axis represents distance, while the Y axis represents formation thickness. Furthermore, the range of saturation values is represented by a colour scale, with a value of 0 shown as dark blue and a value of 1 as yellow.
Figure 7's distribution visualization shows the highest saturation, highlighted in yellow, on the left side. The saturation distribution then shifts to the right side as the lower saturation values rise [19]. Similarly, Figure 8's saturation distribution visualization shows that the saturation distribution widens to the right side, with high saturation values on the opposite side. The predicted model results in Figure 7 and Figure 8 appear similar at first sight because the numbers of epochs used differ only slightly. Similarly, because the number of epochs varies slightly, the visual representations of the distribution in Figure 9 and Figure 10 appear comparable.
Figure 11. The visualization of previous research prediction results [14].
Figure 11 depicts three categories of result images: the predicted result for 1.3 years, the predicted result for 10.4 years, and the predicted result for 30 years. Each of these image sets has three output images: numerical simulation, CNN prediction results, and errors [19].
Figure 12. Reconstruction loss results on training and testing data with 3 epochs and a regularization weight of 10^-5.
Figure 13. Reconstruction loss results on training and testing data with 4 epochs and a regularization weight of 10^-5.
Figure 13 also shows that the eval reconstruction loss is greater than the train reconstruction loss in the initial epoch; the first period demonstrates overfitting.
Figure 14. Reconstruction loss results on training and testing data with 3 epochs and a regularization weight of 10^-3.
Generally, the train and eval loss values of the model with three epochs decrease as the epoch number increases. The train reconstruction loss decreases steadily, and the eval reconstruction loss shows a comparable trend, with the loss value lowering. In addition, Figure 14 demonstrates that the eval reconstruction loss is larger than the train reconstruction loss in the initial epoch; the first period demonstrates overfitting [19].
Figure 15. Reconstruction loss results on training and testing data with 4 epochs and a regularization weight of 10^-3.
The model with four epochs has much lower train and eval loss values as the number of epochs increases; both the train and eval reconstruction losses follow a decreasing trend. Figure 15 also shows that the eval reconstruction loss is greater than the train reconstruction loss in the initial epoch; the first period demonstrates overfitting.
Table 1. RMSE calculation results of the model.
"Environmental Science",
"Computer Science",
"Engineering"
] |
Active shape programming drives Drosophila wing disc eversion
How complex 3D tissue shape emerges during animal development remains an important open question in biology and biophysics. In this work, we study eversion of the Drosophila wing disc pouch, a 3D morphogenesis step when the epithelium transforms from a radially symmetric dome into a curved fold shape via an unknown mechanism. To explain this morphogenesis, we take inspiration from inanimate “shape-programmable” materials, which are capable of undergoing blueprinted 3D shape transformations arising from in-plane gradients of spontaneous strains. Here, we show that active, in-plane cellular behaviors can similarly create spontaneous strains that drive 3D tissue shape change and that the wing disc pouch is shaped in this way. We map cellular behaviors in the wing disc pouch by developing a method for quantifying spatial patterns of cell behaviors on arbitrary 3D tissue surfaces using cellular topology. We use a physical shape-programmability model to show that spontaneous strains arising from measured active cell behaviors create the tissue shape changes observed during eversion. We validate our findings using a knockdown of the mechanosensitive molecular motor MyoVI, which we find to reduce active cell rearrangements and disrupt wing pouch eversion. This work shows that shape programming is a mechanism for animal tissue morphogenesis and suggests that there exist intricate patterns in nature that could present novel designs for shape-programmable materials.
Introduction
Epithelial tissues are sheets of tightly connected cells with apical-basal polarity that form the basic architecture of many animal organs. Deformations of animal epithelia in 3D can be mediated by external forces, either from neighboring tissue that induces buckling instabilities (e.g., [1][2][3]) or from extracellular matrix that confines (e.g., [4]) or expands (e.g., [5]). Alternatively, local differences in mechanics at the apical and basal sides of the deforming epithelium itself can drive out-of-plane tissue shape changes (e.g., ventral furrow invagination in the Drosophila embryo (reviewed in [6]) and fold formation in Drosophila imaginal discs [7]).
Here, we describe a mechanism for generating complex 3D tissue shape involving tissue-scale patterning of in-plane deformations, analogous to the shape transformations of certain inanimate shape-programmable materials.
These shape-programmable materials, like hydrogels and nematic elastomers, experience spontaneous strains whereby the local preferred lengths change in response to stimuli in a desired way [8,9]. Globally patterned spontaneous strains can create a geometric incompatibility with the original shape, triggering specific, desired 3D deformations, such as the formation of a cone from a flat sheet [8,10,11]. Ideas from shape-programmability have already proved insightful for the understanding of differential growth-mediated plant morphogenesis [12,13]. However, animal epithelia are more dynamic, changing cell shape and size, as well as rearranging tissue topology. As these behaviors cause in-plane changes in local tissue dimensions, the ingredients for shape-programmability are, in principle, present.
To test these concepts in animal morphogenesis, we quantify tissue shape changes and cell behaviors in the Drosophila wing disc during a 3D morphogenetic process called eversion (Fig. 1a). Through eversion, the wing disc proper, an epithelial monolayer, undergoes a shape deformation in which the future dorsal and ventral surfaces of the wing blade appose to form a bilayer and escape the overlying squamous epithelium called the peripodial membrane. After eversion, the wing disc begins to resemble the final shape of the adult wing. This process is triggered by a peak in circulating levels of the hormone 20-hydroxyecdysone, analogous to an activator in shape programming. This complex tissue shape change is independent of forces external to the wing disc, as demonstrated by its ability to occur in explant culture [14]. The shape changes of the disc proper also cannot be fully explained by removal of the peripodial membrane or extracellular matrix and appear to be self-sufficient, involving active cellular processes [15][16][17][18][19].
It has long been postulated that the eversion of wing (and leg) discs is achieved by in-plane cell behaviors that are organized by previously established cell morphology patterns [20][21][22][23]. Here, we test this hypothesis by systematic quantification and genetic perturbation of cell behaviors during eversion and demonstrate how cell behaviors contribute to tissue shaping using a physical model analogous to shape programming.
The wing pouch undergoes anisotropic curvature changes during eversion
We first sought to quantify the tissue shape changes happening during wing disc eversion. To this end, we explanted wing discs at fixed time intervals, from the late larval stage (wL3) to 6 hours After Puparium Formation (hAPF). We imaged the wing discs using multi-angle light sheet microscopy and then reconstructed and analyzed the 3D image stacks (Methods 7.3). In this way, we capture the complex 3D shape changes happening throughout the wing disc during eversion (Fig. 1b, Supplemental movie 1).
The most dramatic tissue shape changes can be seen in a central cross-section along the axis perpendicular to the dorsal-ventral boundary (DVB), referred to as "across-DVB" (Fig. 1d, Extended Data Fig. S1d,f,g). We observe three main morphogenetic changes: the peripodial membrane is removed around 4 hAPF, the deeply folded regions unfold, and the pouch undergoes a transition from a monolayer dome to a flat bilayer with a sharply folded interface. In the perpendicular plane, taken through the DVB in the pouch (referred to as "along-DVB"), the tissue does not change as significantly, preserving curvature in this direction (Fig. 1c, Extended Data Fig. S1e-g).
We focus hereafter on the pouch region, as it undergoes the most complex shape change: starting as an almost radially symmetric dome and ending up in a curved-fold shape, with curvature increasing strongly along one axis (across-DVB) but not as much along the other (along-DVB) (Extended Data Fig. S1f,g).
To test the hypothesis that in-plane cellular behaviors lead to 3D tissue shape change, we first build a shape-programmability model that can relate cellular behaviors to spontaneous strain. We then measure patterns of cellular behaviors in the wing pouch during eversion and use our model to test how they affect tissue shape change.
Fig. 1 The wing pouch undergoes anisotropic curvature changes during eversion: a, Schematic cross-sections along the long axis of the wing disc before and after eversion. Before eversion, the wing disc resembles an epithelial sac with the apical side facing inwards. The tissue consists of the Disc Proper (DP), which is a folded, thick, pseudo-stratified monolayer, and the Peripodial Membrane (PM), a thin squamous monolayer. After eversion, the PM is removed and the former pouch region of the DP forms the wing bilayer, with apical facing out and dorsal and ventral on opposing sides. b, Example of a 3D segmentation of the DP in a head-on and side view before eversion (left, wL3) and after bilayer formation
Programmable spring network as a model for epithelial morphogenesis
We developed a coarse-grained model of tissue shape changes, leveraging an analogy between tissue remodelling by internal processes and spontaneous strain-driven shape programming of nematic elastomers [24][25][26]. We use a double layer of interconnected programmable springs representing the apical surface geometry and the material properties of an epithelial sheet, including a bending rigidity introduced by the thickness of the double layer (Fig. 2a, Methods 7.9, 7.14). As an initial configuration, we use a stress-free spherical cap and then assign new rest lengths to the springs. In a continuum limit, this corresponds to introducing a spontaneous strain field λ(X), which depends on the spatial coordinates X (Methods 7.10). To simplify notation, we write λ for λ(X) hereafter. To generate a final output shape, we quasi-statically relax the spring network (Methods 7.9, 7.10). As with conventional elastic strain tensors, λ can be decomposed into isotropic (λ) and anisotropic (λ̃) modes.
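To make the mechanism concrete, here is a minimal single-patch analogue of the programmable spring network, not the full double-layer spherical-cap model of Methods 7.9: a flat hexagonal patch of Hookean springs whose ring springs are reprogrammed to a shorter rest length (a spontaneous strain factor below one) and then relaxed quasi-statically by overdamped gradient descent. The geometric incompatibility of the new rest lengths with the flat state makes the patch buckle into a cone. All numerical values are illustrative.

```python
import numpy as np

# One center node plus six ring vertices of a hexagonal patch, embedded in 3D
angles = np.linspace(0.0, 2.0 * np.pi, 7)[:-1]
pos = np.vstack([[0.0, 0.0, 0.0],
                 np.stack([np.cos(angles), np.sin(angles), np.zeros(6)], axis=1)])
pos[0, 2] = 1e-3  # tiny out-of-plane perturbation to break the flat symmetry

# Springs as (node i, node j, rest length); reprogram the ring to factor 0.8
springs = [(0, i, 1.0) for i in range(1, 7)]           # spokes keep rest length 1
springs += [(i, i % 6 + 1, 0.8) for i in range(1, 7)]  # ring springs shortened

def relax(pos, springs, steps=20000, dt=0.02, k=1.0):
    # Quasi-static (overdamped) relaxation of the total spring energy
    for _ in range(steps):
        f = np.zeros_like(pos)
        for i, j, L0 in springs:
            d = pos[j] - pos[i]
            L = np.linalg.norm(d)
            fij = k * (L - L0) * d / L   # Hookean force along the spring
            f[i] += fij
            f[j] -= fij
        pos = pos + dt * f
    return pos

pos = relax(pos, springs)
height = pos[0, 2] - pos[1:, 2].mean()
print(round(height, 3))  # ~0.6: the patch has popped out of plane into a cone
```

In the full model, spatial gradients of the isotropic and anisotropic spontaneous strain across the spherical cap play the role of the shortened ring springs in this toy example.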
We first wanted to understand how simple choices of spontaneous strain patterns induce a shape change in our model. A simple gradient of λ, for example, causes the spherical cap to balloon in the center or generate wrinkles at the periphery (Fig. 2b.i and ii). Changing the directions and gradients of λ̃ leads to elongation of the cap, an increase in the curvature at the tip, or even flattening of the curvature in the center, eventually leading to a saddle shape (Fig. 2b.iii-viii).
We propose that cell behaviors can give rise to a spontaneous strain field, thereby shape-programming the wing disc pouch and driving 3D shape changes during eversion. The strains measured from observed cell behaviors during eversion (referred to as observed strains, λ*) can be used to infer spontaneous strains. By using coarse-grained spontaneous strains, the topology of the spring network remains unchanged [27]. For the isotropic component of observed strain, we focus on cell area changes (λ*_A), as cell division and cell death are minimal in the everting wing disc (Fig. 2c.i) [14,28,29]. The anisotropic components of observed strain capture contributions stemming both from changes in cell elongation (λ̃*_Q) and from cell rearrangements (λ̃*_R) (Fig. 2c.ii).
Our model can therefore relate cell behaviors to spontaneous strains in order to understand the resulting tissue deformations. We now investigate these quantities in the everting wing disc.
Fig. 2 The programmable spring network relates cell behaviors to spontaneous strain to model epithelial morphogenesis: a, A thick spherical cap as a model for an epithelial tissue. We define a radial coordinate r and basis vectors e_r, e_ϕ, and e_h. The thickness h of the spring network is constant everywhere and introduces a bending rigidity. The model tissue is an elastic medium implemented as a spring network with an initially stress-free state. We change the rest lengths of the springs by imposing a spontaneous strain field λ and allow subsequent relaxation to a new 3D output shape. Top and bottom springs at any position in the lattice have their rest lengths updated by the same amount. The spontaneous strain field λ consists of an isotropic component λ and an anisotropic component λ̃. These components cause changes in area (A_i to A_f) or area-preserving changes in shape (L_i to L_f), respectively. b, Model realizations with simple patterns of spontaneous strains. For each realization, the input pattern of spontaneous strains is displayed above, with the magnitude of strain encoded by color. For anisotropic strain (λ̃), the bars indicate the orientation. Below is the output shape. In b.i,ii, we vary the isotropic contribution λ and keep λ̃ = 1, while in b.iii-viii, we vary λ̃ and keep λ = 1. We probe the model output for input linear radial gradients in λ or λ̃, giving rise to cones with varying degrees of sharpness at the tip (i,v,vi,vii) or saddle shapes (ii,viii). Using a spatially homogeneous pattern of λ̃, we observe an elongated spherical cap when patterned along a fixed direction (iii) and a blunt cone when patterned radially (iv). c, Schematics showing the calculation of apparent spontaneous strains from observed cellular behaviors. c.i, For a patch of cells going from area A_i to A_f, we extract λ. c.ii, A patch of cells undergoing anisotropic deformation due to cell elongation changes or neighbor exchanges causes the length scale in one direction to change from L_i to L_f. From this change, λ̃ is extracted, while λ = 1, as there is no isotropic contribution.
Topological tracking reveals spatial patterns of cell dynamics in the everting wing pouch
To examine cell behaviors, we first segmented apical cell junctions and plotted average cell area and cell elongation in space (Extended Data Fig. S2, Fig. 3a).
From larval stages, we know that cell morphology and behaviors in the pouch are organized radially in the region outside of the dorsal-ventral boundary (outDVB) and parallel to the boundary in the region closest to the dorsal-ventral boundary (DVB) [30][31][32][33]. During eversion, we observe that cell shapes and sizes are patterned similarly. In early stages, cell area follows a radial gradient that disappears by the end of eversion (4hAPF) (Fig. 3a, Methods 7.7). Cell elongation exhibits a global nematic order through 4hAPF before disordering at 6hAPF (Fig. 3b, Methods 7.7).
To compare spatial patterns of cell behaviors over eversion time and across experiments, we define a coordinate system on the evolving 3D geometry. To this end, we use the cellular network topology to define a distance measure on the tissue surface. The topological distance between two cells is defined as the number of cells on the shortest path through the network from one cell to the other (see Extended Data Fig. S3a). We then use topological distance to define a coordinate system in the outDVB and DVB regions (Fig. 3c, Extended Data Fig. S3 and S4a,b, and Methods 7.8). The outDVB region consists of the dorsal and ventral halves, and we identify a single cell that defines the origin in each half (O_D and O_V). In the DVB, we define the origin (O_DV) as a line of cells traversing the DVB. The topological distance k to the origin defines a radial topological coordinate in each region (Fig. 3c,d).
During eversion, tissue previously hidden in the folds becomes visible. In order to compare cell behaviors at different time points, we need to identify a region of tissue that remains in the field of view throughout eversion. To this end, we count the number of cells N_ROI within the largest visible topological ring at wL3. The corresponding region of interest at later time points is then defined to be centered at the origin and to contain the same number of cells. Since there are very few divisions and extrusions during eversion [14,28], and because cells cannot flow across the DVB [34,35], we expect that our regions of interest contain largely the same set of cells, and we refer to them as topologically tracked regions (Fig. 3d, Extended Data Fig. S5a,b).
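For illustration, a small sketch (names hypothetical) of how the tracked-region radius k(N_ROI) can be read off from per-cell topological distances, following the cumulative-count definition above:

```python
import numpy as np

def k_of_N_roi(k_per_cell, N_roi):
    """Smallest topological radius whose cumulative cell count reaches N_roi.

    k_per_cell: array of topological distances k, one entry per cell.
    N(k) counts cells with distance <= k; we return the first k where
    N(k) >= N_roi, i.e. k(N_ROI) for the topologically tracked region.
    """
    ks = np.arange(k_per_cell.max() + 1)
    N = np.array([(k_per_cell <= k).sum() for k in ks])  # cumulative counts
    idx = min(np.searchsorted(N, N_roi), len(ks) - 1)
    return int(ks[idx])
```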
Next, we quantify patterns of cell area (A) and the radial cell elongation tensor (Q) as a function of the topological coordinate k throughout eversion (see Methods 7.7). We find that our topological coordinate system recapitulates previously reported gradients in cell area and radial cell elongation at earlier larval stages (Extended Data Fig. S4c,d). In the outDVB at wL3, we observe a cell area gradient that relaxes gradually until 4hAPF (Fig. 3e). At the same time, cell elongation develops a gradient, with cells in the periphery elongating tangentially (Fig. 3f). Between 4h and 6hAPF, cells dramatically expand their area, and tangential cell elongation completely relaxes. We do not observe gradients in cell area or cell elongation in the DVB. Instead, cell area expands globally in the DVB during eversion, while cell elongation along the DVB first increases up to 2hAPF and then decreases at 4hAPF (Fig. 3e,f).
Using topological distance allows us to extract spatial patterns of oriented cell rearrangements from snapshots of eversion. Radially oriented rearrangements lead to a decrease in the number of cells per k, whereas tangentially oriented rearrangements lead to an increase (see Fig. 3g). As a consequence, k(N_ROI) changes based on the orientation and magnitude of rearrangements.
We find that k(N_ROI) increases with time (Fig. 3g), consistent with radially oriented cell rearrangements in the outDVB and rearrangements oriented along the boundary in the DVB.
Together, these measured cell behaviors are a superposition of different radial patterns with the additional complexity of the DVB. Next, using our programmable spring model (Fig. 2), we ask how in-plane strains caused by these cell behaviors could drive 3D shape changes during eversion.
Active cell rearrangements drive tissue shape changes during wing pouch eversion
To be able to compare the output of the model to the 3D shape changes happening during eversion, we quantify the curvature and size dynamics of the apical surface of the wing pouch. We limit the analysis to the topologically tracked region and quantify the change in curvature from the wL3 stage along lines in the along-DVB and across-DVB directions (Fig. 4a, Extended Data Fig. S5, Methods 7.4). We focus on the stages between wL3 and 4hAPF, during which cell shape patterns have radial symmetry (Fig. 3a,b,e,f). We observe an overall curvature increase that is more pronounced in the across-DVB direction, peaking at the DVB, while flattening at the dorsal and ventral sides.
Next, we measure the strain field (λ*) resulting from cell behaviors as a function of the distance from the origin, r or ρ (Fig. 4b, Extended Data Fig. S6, Extended Data Fig. S7, Methods 7.12). We quantify the isotropic component resulting from cell area changes (λ*_A) (Fig. 4b, Extended Data Fig. S7c). In the outDVB, we observe an area expansion (λ*_A > 1) up to 2hAPF with a radially decreasing profile. In the DVB, we observe the build-up of a shallower gradient that is transiently paused from 0hAPF to 2hAPF. We first consider cell rearrangements as a possible source of spontaneous strain. When we input only λ_R = {1, λ̃*_R} as spontaneous strain in the model, it alone creates a strong curvature increase, resembling many features of the data but without increasing tissue size (Fig. 4a,d). Note that λ_R also introduces a difference in curvature change between the two directions, across-DVB and along-DVB, at the final stage.
After relaxing the spring network to a force-balanced state, stresses due to residual strains remain. The stresses corresponding to these residual strains can drive passive responses in cell behaviors. The residual strains appear as a mismatch between the spontaneous strains (input to the model, λ) and the strains resulting from changes in spring length during relaxation of the network, F (Fig. 4e.i, Methods 7.10).
When we calculate the residual strains generated by spontaneous strain from rearrangements, we find that the anisotropic component of the residual strain (λ̃^res_R) is tangentially oriented (Fig. 4e.ii, Extended Data Fig. S10d). This tangentially oriented strain is similar to the pattern of cell elongation changes (λ̃*_Q) (compare Fig. 4b and 4e.ii), suggesting that these cell elongation changes are a passive response to the spontaneous strain from rearrangements. To test this idea, we next consider cell elongation as a possible source of spontaneous strain. When we input only λ_Q = {1, λ̃*_Q} as spontaneous strain in the model, we observe that the spring network flattens at the center rather than curving, and cell elongations themselves do not lead to any further residual strains (Fig. 4f,g, Extended Data Fig. S10c). This result is consistent with cell elongation changes being a passive response to cell rearrangements and not driving tissue shape change during eversion.
Cell rearrangements as spontaneous strains also lead to a residual isotropic compression (λ^res_R, Fig. 4e.ii, Extended Data Fig. S10b). This residual could be compensated by spontaneous area change, which is also required by the observation that overall tissue size increases during eversion.
When we input only the isotropic strain from observed cell area changes, λ_A = {λ*_A, 1}, as spontaneous strain in the model, the overall size increases with minimal curvature change (Fig. 4h). This result indicates that although cell area changes are an active behavior and lead to an overall size increase, they do not significantly contribute to changes in tissue shape. However, there is a transient effect of cell area changes on tissue curvature at time point t2 (note the dip in the curve at t2 in Fig. 4h). In this scenario of spontaneous area strain only, the transient effect arises from the experimentally observed pause in cell area expansion in the DVB at 2hAPF as compared to the outDVB (Extended Data Fig. S7c, compare DVB and outDVB). This curvature difference disappears when the cell area in the DVB expands to match the outDVB at 4hAPF (Fig. 4h). Measuring λ^res_A, we find that cell area changes themselves create a small residual in the DVB (Fig. 4i, Extended Data Fig. S10). The anisotropic part of this residual could also contribute to the observed passive cell elongations.
Using these examples, we next infer the spontaneous strain patterns that drive tissue shape changes and govern cellular behaviors. We have found that cell rearrangements and cell area changes are both active and contribute to spontaneous strain, whereas cell elongation is a passive elastic response and does not. The total spontaneous strain is therefore composed of the anisotropic part of the observed strain due to rearrangements (λ̃*_R, Fig. 4b) and the isotropic part of the observed cell area changes (λ*_A, Fig. 4b), compensated by the isotropic part of the residual strain due to cell rearrangements (λ^res_R, Fig. 4e.ii). When we input this total inferred spontaneous strain in the model, we find that we can account for the curvature and size changes observed in the everting wing pouch from wL3 to 4hAPF (compare Fig. 4j to Fig. 4a; see also Extended Data Fig. S9b, Supplemental movie 2). The patterns of residual strains generated by the model suggest that after eversion (at 4hAPF), cells experience elongation due to shear stress as well as area constriction due to compressive stresses (Fig. 4k, Extended Data Fig. S10f).
In summary, the good qualitative agreement between the model output (Fig. 4j) and the observed wing pouch curvature changes (Fig. 4a) indicates that the in-plane pattern of spontaneous strain generated by cell behaviors during eversion is sufficient to capture morphogenesis, and that we have identified the most relevant active cellular events responsible for pouch morphogenesis. Specifically, our data predict that altering cell rearrangements in the pouch should have a profound consequence for tissue shape change. We next test this prediction with a genetic perturbation.
Reduction of active cell rearrangements with MyoVI knockdown results in a tissue shape phenotype

Previous work in the wing disc pouch at earlier larval stages showed that cell rearrangements drive cell shape patterning [33]. This work suggested that patterns of active cell rearrangements self-organize via mechanosensitive feedback mediated by MyoVI. We therefore next investigate whether MyoVI knockdown in the wing pouch (Extended Data Fig. S11a) alters cell rearrangements during eversion and leads to a tissue shape phenotype. Indeed, we observe that the MyoVI RNAi wing disc pouch fails to form a flat bilayer after eversion, even though its initial shape is similar to that of wild type (wt) (Fig. 5a.i). This phenotype is best captured by the behavior of the curvature in the across-DVB direction (Fig. 5a.ii). Here, the curvature decreases in the center, in contrast to wt, where it increases. In the along-DVB direction, the curvature remains unchanged over time in the MyoVI RNAi knockdown (Fig. 5a,b, Extended Data Fig. S11c,d).
Strikingly, other features of eversion, such as the opening of the folds and the removal of the peripodial membrane, are unaffected by the MyoVI RNAi knockdown (Fig. 5a, see 4hAPF), indicating that the cause of the altered shape is pouch-intrinsic. This result further supports the idea that tissue shape changes in the wing pouch during eversion are independent of other morphogenetic events happening in the wing disc and instead rely on active cell behaviors in the pouch.
Next, we quantify cell behaviors in MyoVI RNAi. While the gradients in cell areas and elongation are initially similar to wt (Extended Data Fig. S11e,f), the strains λ* inferred from the individual types of cell behaviors differ (Fig. 5c, Extended Data Fig. S12). From work at earlier larval stages, we expect oriented rearrangements to be reduced [33]. Indeed, we find that MyoVI RNAi reduces the amount of radial cell rearrangements in the outDVB during eversion (Fig. 5c, Extended Data Fig. S11g). However, in the DVB, rearrangements have the opposite orientation compared to wt eversion. Notably, we also see a complete lack of cell area expansion in the DVB (Fig. 5c, Extended Data Fig. S11e).
The pattern of cell elongations in the outDVB is similar to wt, but in the DVB it is oriented perpendicularly (Fig. 5c, Extended Data Fig. S11f).
Using the programmable spring model, we test how the reduction of spontaneous strain due to cell rearrangements affects tissue shape changes. When we input λ_R = {1, λ̃*_R} from cell rearrangements measured in MyoVI RNAi as spontaneous strain in the model, we see only a slight increase in curvature at the final time point in both the along-DVB and across-DVB directions (Fig. 5d).
Thus, we conclude that the reduction of cell rearrangements in MyoVI RNAi as compared to wt contributes to the abnormal tissue shape changes happening during eversion in MyoVI RNAi. We find that the anisotropic component of the residual strain (λ̃^res_R) is small, tangentially oriented in the outDVB, and radially oriented in the DVB, similar to the cell elongation pattern (Fig. 5c,e, Extended Data Fig. S13d). If we input measured cell elongation changes as spontaneous strain in the model (λ_Q = {1, λ̃*_Q}), we do not recapitulate the observed tissue shape changes (Extended Data Fig. S14). This result suggests that the cell elongation changes in MyoVI RNAi are a passive response to spontaneous strain by cell rearrangements, as in wt.
While the change in spontaneous strain due to rearrangements captures a significant portion of the difference between wt and MyoVI RNAi (compare Fig. 5b with Fig. 5d), it fails to recapitulate the finer progression of shape from wL3 to 4hAPF in MyoVI RNAi. In particular, the curvature at the final time point of the model calculation is not flattened in the center of the across-DVB direction, and the curvature increases slightly in both directions (Fig. 5b,d).
Thus, we proceed to input the observed cell area changes as spontaneous strains in the model (λ_A = {λ*_A, 1}). We find that they produce shape changes over time similar to those observed during MyoVI RNAi eversion, recapitulating both the decrease in curvature in the center of the across-DVB direction and the lack of curvature change in the along-DVB direction (Fig. 5f, Supplemental movie 3). We conclude, therefore, that the subtle flattening in the pouch center in MyoVI RNAi during eversion can be explained by the combination of cell area expansion in the outDVB with no area expansion in the DVB. This result highlights that, while cell area changes do not lead to a curvature change in wt, the difference in area expansion between the tissue regions produces the MyoVI RNAi shape. In addition, although we do not observe cell area expansion in the DVB, the area expansion in the outDVB creates residual strains in both regions (Fig. 5g, Extended Data Fig. S13b). These residual strains have an anisotropic component that, together with the residual strains from cell rearrangements, accounts for the measured cell elongation patterns in MyoVI RNAi (compare Fig. 5c,e.ii,g.ii).
In sum, the results from the MyoVI RNAi perturbation validate the idea that the wing disc pouch deforms like a shape-programmable material. First, by locally perturbing MyoVI, we show that we can alter the normal tissue shape change even though the tissue outside behaves normally, demonstrating that the shape change is tissue-autonomous. Second, we show that reducing the active cell rearrangements in the pouch significantly alters the tissue shape outcome, consistent with our theoretical model.
Discussion
In this work, we show that 3D epithelial tissue morphogenesis in the Drosophila wing disc pouch is based on in-plane spontaneous strains generated by active cellular behaviors. We develop a metric-free, topological method to quantify patterns of cell dynamics on arbitrarily shaped tissue surfaces, as well as a theoretical approach to tissue morphogenesis inspired by shape-programmable materials. Together, these advances reveal the mechanics of tissue shape changes during wing disc eversion, showing that active rearrangements and active area expansion govern the 3D tissue shape and size changes.
We hypothesize that the organization of active behaviors during wing eversion arises from patterning during larval growth. First, the pre-patterned radial cell area gradient resolves during eversion, giving rise to a gradient of spontaneous strain in the outDVB. Second, the orientation of cell rearrangements follows that of earlier stages, indicating that the mechanosensitive feedback revealed in previous work is still active during eversion. Overall, this suggests a developmental mechanism through which mechanical cues at early stages organize cell behavior patterns that later resolve, resulting in a tissue shape change. Such behavior would resemble biochemical pre-patterning, in which cell fates are often defined long before differentiation.
Active, patterned rearrangements can robustly give rise to a specific target shape if the tissue is solid on the time scale of morphogenesis. Our work therefore reveals that the everting wing disc behaves as an elastic solid undergoing plastic deformation and demonstrates that the mere presence of rearrangements should not be taken as a sign of a fluid tissue with a vanishing elastic modulus. Many animal tissues with dynamic rearrangements could thus be in the solid regime and therefore be pre-patterned towards a target shape. Our work, inspired by the shape-programmability of complex materials, reveals principles of shape generation that could be quite general. We therefore propose that many other morphogenetic events could, and should, be considered and better understood through the lens of shape-programmability.
Experimental model
All experiments were performed with publicly available Drosophila melanogaster lines. Flies were maintained at 25°C under a 12 hr light/dark cycle and fed with standard food containing cornmeal, yeast extract, soy flour, malt, agar, methyl 4-hydroxybenzoate, sugar beet syrup, and propionic acid. Adult flies were transferred to fresh food 2-3 times per week. Only males were studied, for consistency and due to their smaller size. As wild type, we used the F1 offspring of a cross between w-;ecad::GFP and w;nub-Gal4,ecad::GFP;;.
Image acquisition and processing
Sample preparation: Wing discs of larval stages were dissected in culture medium as previously described [36], without surface sterilization or antibiotics. Prepupal stages from 0 to 6 hAPF required a slightly different dissection strategy. Prepupae were marked by the time of white pupa formation and collected with a wet brush after the required time interval. Next, pupae were placed on a wet tissue, cleaned with a wet brush to remove residual food, and transferred into glass staining blocks (see Ref. [36]) filled with dissection medium. To dissect the wing disc, a small cut was made with fine surgical scissors (2.5 mm, FST 15000-08, Fine Science Tools GmbH) at the posterior end, creating a small hole to release pressure. This allowed the next cut to be performed at half the anterior-posterior length, separating the anterior and posterior halves. The anterior part of the puparium was then opened at the anterior end by administering a cut just posterior to the spiracles, followed by a second cut on the ventral side along the PD-axis.
The pupal case was then held open with one pair of forceps, and a second pair in the other hand was used to remove the wing disc. To dissect wing discs at 0hAPF, the pupa is still soft enough to be turned inside-out after the cut that separates the anterior and posterior halves, similar to larval stages.
Low-melting agarose (LMA) was prepared by mixing Grace's insect medium 1:1 with a 2% LMA stock solution in water. Wing discs were transferred into mounting medium in U-glass dishes, aspirated into the capillary at room temperature, and imaged immediately after the LMA solidified. The imaging chamber was filled with Grace's insect medium (measured refractive index = 1.3424). For pupal stages, four imaging angles (dorsal, ventral, and 2 lateral) with 90° rotation were acquired; for larval stages, three imaging angles (dorsal and ventral in one, and 2 lateral) were acquired. Data from dual illumination were fused on the microscope using a mean fusion.
Multiview reconstruction: Multiview reconstruction was based on the BigStitcher plugin in Fiji [37,38]. Images were acquired without fluorescent beads, and multiview reconstruction was done using a semi-automated approach. Individual views were manually pre-aligned. Thereafter, precise multiview alignment was computed based on bright spots in the data with an affine transformation model using the Iterative Closest Point (ICP) algorithm. Next, images were oriented to show the apical side in XY and the lateral side in ZY. Lastly, images were deconvolved using point spread functions extracted from the bright spots and saved as tif files with a manually specified bounding box.
Surface extraction of 3D images for visualization: Surfaces shown in Fig. 1 and the Supplemental movie were extracted from 2hAPF and wL3 images. To do so, we first trained a pixel classifier on the strong apical signal of Ecadherin-GFP of a different image of the same stage with napari-accelerated-pixel-and-object-classification [39,40]. Feature sizes of 1-5 pixels were used to predict the foreground on the target image. Next, we used the pyclesperanto library [41] to select the largest labels and close gaps in the segmentation with the closing-sphere algorithm. For additional gap-filling in the 2hAPF time point, we used vedo [42] to generate a point cloud and extract the point-cloud density. When necessary, we applied some manual pruning of the segmentation in napari. We repeated this processing on the weak Ecadherin-GFP signal from the lateral membrane and subtracted the apical segmentation from the output. As a result, we achieved a full tissue segmentation that stops just below the apical junction layer. We then extracted the surface with the napari-process-points-and-surfaces library [43] and applied smoothing and hole filling. The visualizations were generated using ParaView [44]. Regions and directions of the cross-sections were annotated in Illustrator. Supplemental movies were created using ParaView and Fiji [38].
Quantifying curvature of cross-sections
Tissue shape analysis was performed on multi-angle fused SPIM images. We used Fiji re-slicing tools to generate two orthogonal cross-sections along the apical-basal direction. Across-DVB is a cross-section along the center of the long axis of the wing disc. To find the center, we used the position of the sensory organ precursors and general morphology. The along-DVB cross-section follows the DVB and was identified by Ecadherin-GFP signal intensity. The apical pouch shape was outlined manually along both directions over the pouch region up to the HP-fold using custom Fiji macros. Subsequent pouch shape analysis was performed in Python. The tissue shape information was extracted from Fiji into Python using the Python 'read-roi' package.
The extracted apical shapes were aligned and rotated for each wing disc as follows. First, starting from the left-most point of the curve, we measure the arc length of the curve in the clockwise direction. The arc length of the ith point on the curve is given by

s(i) = Σ_{j=2}^{i} ‖x_j − x_{j−1}‖,  i = 2, …, n,

where n is the number of points in the discrete curve and x_i = (x_i, y_i) is the position vector of the ith point. We keep s(i = 1) = 0.
Next, we define the center of the curve at the middle and offset the arc lengths to have s = 0 at the center. This leads to negative arc lengths on the left side of the center and positive arc lengths on the right side of the center (Extended Data Fig. S1, S5).
In order to compute a mean curve from different wing discs of the same developmental stage, we translated and rotated the curves (Extended Data Fig. S1b). We translate each curve by setting its midpoint as the origin (0, 0). To rotate the curve, we compute the center of mass of the curve. Then, we define the new y-axis as the line that joins the center of mass to the origin. Finally, for each curve, we smooth and interpolate between the discrete points using spline interpolation. We use the scipy.interpolate.UnivariateSpline function of scipy [45]. To smooth the spline, we define five knot points: one at the mid-point of the curve, and the others at half and three-fourths of the length from the mid-point on either side. Next, we compute the curvature of each curve using

κ = (x′y″ − y′x″)/(x′² + y′²)^{3/2},

where ′ refers to the derivative with respect to the parameter of the curve, which is arc length in our case.
Finally, to compute an average curve, we take the average position vectors at arc lengths running from a minimum to a maximum arc length in intervals of 5 µm. We perform a similar averaging of the curvature values to obtain average curvature profiles.
To calculate the change in curvature, we normalized each curve from 0 to 1 and used linear interpolation with 40 positions to subtract the initial from subsequent curvatures. We then re-introduced the average arc length for each developmental stage at each of the normalized positions.
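A condensed sketch of this curvature pipeline, assuming scipy; for brevity, the five-knot scheme described above is replaced by scipy's default smoothing, and the function name is ours:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def curvature_profile(points, n_samples=200):
    """Curvature of a discrete 2D curve, parameterized by arc length.

    points: (n, 2) array of ordered xy coordinates along the curve.
    Returns arc lengths (centered at the curve midpoint) and curvatures.
    """
    points = np.asarray(points, dtype=float)
    # Cumulative arc length, s(1) = 0, then centered at the midpoint.
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s -= s[-1] / 2.0

    # Smoothing splines x(s), y(s).
    sx = UnivariateSpline(s, points[:, 0], k=4)
    sy = UnivariateSpline(s, points[:, 1], k=4)

    s_fine = np.linspace(s[0], s[-1], n_samples)
    x1, y1 = sx.derivative(1)(s_fine), sy.derivative(1)(s_fine)
    x2, y2 = sx.derivative(2)(s_fine), sy.derivative(2)(s_fine)
    # Signed curvature; the denominator is ~1 for arc-length parameterization.
    kappa = (x1 * y2 - y1 * x2) / (x1**2 + y1**2) ** 1.5
    return s_fine, kappa
```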
Segmentation of the apical junction network
To analyze cell shapes, we used four angles separated by 90° for the segmentation of early pupal stages, and a single angle for larval wing discs (Extended Data Fig. S2). Z-stacks from each imaging angle were denoised, if necessary, using the N2V algorithm [46], and the signal-to-background ratio was further improved with background subtraction tools in Fiji [38]. We made 2D projections of the Ecadherin-GFP signal in the Disc Proper layer as previously described [47]. Importantly, this algorithm also outputs a height-map image, which encodes the 3D information in the intensity of each pixel. The cells in the wing pouch were segmented using Tissue Analyzer and manually corrected [48]. We chose a bond length cutoff of 2 pixels (∼0.46 µm). The ventral side at 0hAPF was excluded from the analysis, as at this stage the ventral region is never fully in view from any imaging angle. The number of wing discs per time point and the images for each region are indicated in Table 1. Images were rotated to orient distal down. Height-map images were rotated accordingly using ImageMagick (ImageMagick Development Team, 2021). We used Fiji macros included with TissueMiner [49] to manually specify regions of interest (ROIs). The DVB was identified based on Ecadherin-GFP signal intensity [50] and the dorsal vs. ventral pouch by their positions relative to global tissue morphology. For larval stages, the DVB, dorsal, and ventral regions were identified in one image. For images showing lateral views of pupal stages, the DVB was identified, whereas for images showing the outDVB region, the dorsal or ventral region and the cells next to the DVB were labelled. The cells next to the DVB were required as a landmark for topological analysis but were otherwise not analysed separately. We then ran the TissueMiner workflow to create a relational database.
Measurement of cell area and cell elongation tensor

We represent the configuration of the cellular network by positions of the cell vertices, where three or more cell bonds meet, and their topological relations, as in TissueMiner [49]. We extended TissueMiner to the third dimension using the information extracted from height-maps, as described in Methods 7.5. Each cell α in the 3D network contains N^α vertices v^α_i, defining the network geometry. For every cell, we define a centroid R^α, an area A^α, and a unit normal vector N^α as

R^α = (1/N^α) Σ_i v^α_i,   A^α = (1/2) Σ_i ‖n^α_i‖,   N^α = (Σ_i n^α_i)/‖Σ_i n^α_i‖,

where n^α_i = (v^α_{i+1} − v^α_i) × (R^α − v^α_i) is the normal vector on the triangle formed by one edge of the cell and the vector pointing from the cell vertex to the cell centroid. It has a norm equal to twice the area of the triangle.
We then create a subcellular triangulation by connecting each pair of consecutive vertices in every cell with its centroid, {v_i, v_{i+1}, R^α}. This creates a complete triangulation that depends both on the vertex positions and the centroids of the cellular network.
Each triangle is defined by its three vertices {R_0, R_1, R_2}, which define two triangle vectors, E_1 = R_1 − R_0 and E_2 = R_2 − R_0, and its unit normal vector N = (E_1 × E_2)/‖E_1 × E_2‖. These vectors also define the local basis on the triangle. Using the triangle vectors, we can define the area of the triangle, A = ‖E_1 × E_2‖/2, and the rotation angles θ_x and θ_y that rotate a vector parallel to the z-axis of the lab reference frame to the vector normal to the plane of the triangle. Here, arctan(x, y) denotes the element-wise arc tangent of x/y, and N_i is a component of the unit vector normal to the triangle plane.
For each triangle, we define the triangle shape tensor S^3d as the tensor that maps a reference equilateral triangle with area A_0 lying in the xy-plane, defined by the vectors C_i, to the current triangle: S^3d C_i = E_i. The vectors of the reference equilateral triangle are C_1 = (l, 0, 0) and C_2 = (l/2, l√3/2, 0), where the side length l = √(4A_0/√3) with A_0 = 1.
The triangle shape tensor S^3d can be written in terms of a planar state tensor S^planar in the reference frame of the triangle as S^3d = R_x(θ_x) R_y(θ_y) S^planar, where R_x(θ_x) and R_y(θ_y) are rotations around the x and y axes, respectively. The angles θ_x and θ_y are defined in Eq. 5. The planar triangle state tensor, represented by a 3x3 matrix with the z components set to 0, can be decomposed as

S^planar = √(A/A_0) R_z(θ_z) exp(‖Q̃‖ R_z(ϕ) γ R_z(−ϕ)),

as in TissueMiner. Here, γ is a diagonal matrix with diagonal elements {1, −1, 0}, and R_z is the rotation matrix around the z-axis. A is the area of the triangle, ‖Q̃‖ the magnitude of the elongation tensor, ϕ the direction of elongation in the xy-plane, and θ_z the rotation angle around the z-axis relative to the reference equilateral triangle. The 3D elongation tensor Q̃ in the lab reference frame and the elongation tensor in the xy-plane of the triangle Q̃^planar are related by Q̃ = R_x(θ_x) R_y(θ_y) Q̃^planar R_y(θ_y)⁻¹ R_x(θ_x)⁻¹. The magnitude of elongation is calculated as [51]

‖Q̃‖ = arcsinh(‖S_ts‖/‖S_ta‖),

where ‖S_ta‖ and ‖S_ts‖ are the norms of the trace-antisymmetric and traceless-symmetric parts of the planar triangle state tensor S^planar, respectively. The angle of the elongation tensor is given by

ϕ = (1/2) arctan2(B_xy, B_xx),

where B_ij are the components of the nematic part of the triangle state tensor S, and arctan2(x1, x2) is the inverse tangent of x1/x2 in which the signs of x1 and x2 are taken into account. In this way, one can select the branch of the multivalued inverse tangent function that corresponds to the angle defined by the point (x1, x2) in a plane.
We now define the cell elongation tensor as the area-weighted average of the corresponding triangle elongations,

Q^α = (1/A^α) Σ_t a_t Q̃_t,

where A^α is the area of the cell, a_t the area of a triangle that overlaps with the cell, and Q̃_t the elongation tensor of that triangle.
To calculate the radial component of the cell elongation tensor relative to the origin in cell α, we first define the radial direction. To this end, we use a 3D vector r connecting the origin to the cell centroid, and we project its direction r̂ = r/‖r‖ into the tangent plane of the cell, which defines the in-plane radial direction r̂_tangent. The tangent plane of the cell is defined by its normal vector N defined in Eq. 3. We calculate the radial component of the cell elongation tensor relative to the origin as Q_rr = r̂_tangent · Q^α · r̂_tangent.
In the DVB, multiple cells form the origin. To calculate Q_ρρ, the vector ρ connects the cell centroid to the averaged position of the topologically nearest cells at k = 0. We project its direction ρ̂ = ρ/‖ρ‖ into the tangent plane of the cell α, which defines the in-plane direction ρ̂_tangent from the DVB origin. We calculate the corresponding component of the cell elongation tensor as Q_ρρ = ρ̂_tangent · Q^α · ρ̂_tangent.
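A minimal sketch of this projection (helper name ours); the same function yields Q_ρρ when the origin is the averaged position of the nearest k = 0 cells:

```python
import numpy as np

def radial_elongation(Q, origin, centroid, normal):
    """Radial component of a cell elongation tensor, e.g. Q_rr.

    Q: (3, 3) cell elongation tensor; origin, centroid: 3D positions;
    normal: unit normal vector of the cell's tangent plane.
    """
    r = centroid - origin
    r_tan = r - np.dot(r, normal) * normal  # project into the tangent plane
    r_tan /= np.linalg.norm(r_tan)
    return float(r_tan @ Q @ r_tan)
```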
Topological distance coordinate system
To calculate topological distances between any two cells, we determine the topological network using the python-igraph library [52].
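For illustration, a sketch of the topological-distance computation with python-igraph (function and input names are ours; Graph.distances is assumed, as in recent python-igraph releases):

```python
import igraph as ig

def topological_distances(n_cells, neighbor_pairs, origin_cells):
    """Topological distance k of every cell to a set of origin cells.

    n_cells: number of segmented cells (vertices of the network);
    neighbor_pairs: list of (i, j) index pairs of cells sharing a junction;
    origin_cells: indices of the origin cells (k = 0).
    """
    g = ig.Graph(n=n_cells, edges=neighbor_pairs)
    # One row of shortest-path lengths per origin cell.
    dist = g.distances(source=origin_cells)
    # k is the minimum over origin cells (handles a line-of-cells origin).
    return [min(col) for col in zip(*dist)]
```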
In each of the tissue regions, we define separate origins:
• outDVB region: To define the origin of the outDVB regions, we first determine the pouch margin cells as cells that lie on the outermost row of the segmentation mask and do not overlap with the DVB ROI. Then, for each cell in the region, we calculate the shortest topological distance to the margin cells. This identifies the set of maximally distant cells, i.e., those with the maximal shortest topological distance to the margin. The origin is then defined as the cell that neighbors the DVB and is at the shortest metric distance to the averaged position of the maximally distant cells. At larval stages, both dorsal and ventral sides of the outDVB region are visible, and an origin cell is defined on each side.
• DVB region: We define the origin to consist of a line of cells traversing the DVB. At larval stages, the origin cells are defined as those cells within a topological distance of 2 from a straight line connecting the dorsal and ventral center cells. For pupal stages, the origin cells for the DVB are defined as the first row of cells next to the margin of the segmentation mask on the distal side.
The so-identified origin cells serve as the origin for the topological distance (k) of each cell in the tissue. In this way, k follows the radial direction along the surface for the outDVB and the path along the DVB for the DVB.
3D visualization of cell properties:
We visualize cellular properties and cell elongation tensors on the 3D segmentation mask using ParaView [44].
To plot a rank-2 tensor, like the cell elongation tensor, we take the largest eigenvalue of Q^α as the norm of elongation and the corresponding eigenvector as the direction of elongation, which we can plot on the surface. Note that for cells or patches that are reasonably flat, the eigenvector with the eigenvalue closest to zero is (almost) parallel to the normal vector of the patch.
Spatial analysis of cell properties:
We acquired data for 5 to 7 wing discs of each developmental stage. Images that were not of segmentable quality were excluded from the analysis. We average cell properties by k between dorsal and ventral for the outDVB and between images from both sides of the DVB. We used a cell-area-weighted average for elongation. The 95% confidence interval and the mean for each developmental stage were calculated via bootstrap re-sampling with 10,000 repeats.
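A minimal sketch of the bootstrap (names ours; the area weighting used for elongation is omitted for brevity):

```python
import numpy as np

def bootstrap_mean_ci(values, n_boot=10_000, ci=0.95, rng=None):
    """Mean and bootstrap confidence interval, e.g. for cell areas in one k-bin.

    values: 1D array of per-cell measurements pooled across wing discs.
    Returns (mean, lower, upper).
    """
    rng = np.random.default_rng(rng)
    values = np.asarray(values, dtype=float)
    # Resample with replacement and recompute the mean n_boot times.
    idx = rng.integers(0, len(values), size=(n_boot, len(values)))
    boot_means = values[idx].mean(axis=1)
    alpha = (1.0 - ci) / 2.0
    lower, upper = np.quantile(boot_means, [alpha, 1.0 - alpha])
    return values.mean(), lower, upper
```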
Mechanics of the programmable spring lattice
We use a programmable spring lattice in the shape of a spherical cap to model the wing disc pouch, which is an epithelial monolayer.
Approximating the wing disc pouch as a spherical cap: We calculate the average radius of curvature of the apical side of the wing disc pouch at the wL3 stage in the topologically tracked region as R = 77.66 µm. The angular size of the spherical cap, denoted by θ_M, is given by

θ_M = (w_DV/2 + w_ODV)/R,

where w_DV is the width of the DVB and w_ODV is the average in-surface distance from the DVB to the periphery of the outDVB region (Extended Data Fig. S8a). We calculate w_DV = 15 µm and w_ODV = 59.77 µm. Using these calculated dimensions, we determine θ_M = 49.63°.
Generating the lattice: We first generate a triangular lattice in the shape of a hollow sphere, keeping the radius of curvature R calculated above. This lattice was obtained using the function meshzoo.icosa_sphere available in the Python package Meshzoo (www.github.com/meshpro/meshzoo). In this function, we set the argument refine_factor = 30, which leads to edges of length 3.11 ± 0.18 µm. This edge length was found to be small enough to prevent computational errors in the simulations of this study. We then cropped the spherical lattice to obtain a spherical cap of angular size θ_M (calculated above, Extended Data Fig. S8b). Next, we place a second layer at the bottom of this lattice at a separation h. This new layer is identical to the original lattice in terms of the topology of the lattice network but is rescaled to have a radius of curvature of R − h. We connect the two layers with programmable springs using the topology shown in the inset of Extended Data Fig. S8c. The lattice obtained this way represents an elastic surface of thickness h, which can be changed to tune the bending rigidity of the model. Vertices typically have 13 neighbors (6 on their own layer and 7 on the other layer). However, six to eight vertices out of about 3220 vertices in the whole network form point defects; these vertices have 11 neighbors.
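A sketch of this construction under the stated choices; the meshzoo call name follows current releases of the package and may differ from the version used originally, and the thickness default here is an arbitrary placeholder:

```python
import numpy as np
import meshzoo  # www.github.com/meshpro/meshzoo

def spherical_cap_double_layer(R=77.66, h=2.0, theta_M=np.deg2rad(49.63)):
    """Two-layer spherical-cap lattice (call name per current meshzoo).

    Returns outer- and inner-layer vertex positions and the shared layer
    triangulation; the diagonal springs connecting the two layers
    (Extended Data Fig. S8c) are omitted from this sketch.
    """
    points, cells = meshzoo.icosa_sphere(30)  # unit sphere, refine factor 30

    # Crop to a cap of angular size theta_M around the +z axis.
    keep = np.arccos(np.clip(points[:, 2], -1.0, 1.0)) <= theta_M
    remap = -np.ones(len(points), dtype=int)
    remap[keep] = np.arange(keep.sum())
    cap_cells = remap[cells[np.all(keep[cells], axis=1)]]
    cap_points = points[keep]

    # Outer layer at radius R; inner layer, identical topology, at R - h.
    return R * cap_points, (R - h) * cap_points, cap_cells
```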
In order to remove any possible effects coming from the lattice structure (angle of edges or degree of connectivity), we perform simulations for each condition by taking spherical caps from 50 different regions of the sphere and averaging the result. We see only very small variability in the final shape, quantified by the standard deviation of the curvature change profiles in our model results. Thus, we conclude that the lattice structure does not affect our results.
Elastic energy of model: The edges of the lattice act as overdamped elastic springs with rest lengths equal to their initial lengths. Hence, the model is stress-free at T = 0, with δ^a_0 = ‖ΔX^a‖, where a denotes a single spring, ΔX^a denotes the spring vector given by X_β − X_α, where α and β are the vertices at the two ends of spring a, and X_α denotes the position vector of vertex α. During a subsequent time step T, the rest length of spring a (δ^a_T) can differ from its current length δ^a. The elastic energy of this state for the whole lattice is given by

W = (k/2) Σ_a (δ^a − δ^a_T)²,

where the sum is over all springs of the network and k represents the spring constant. At each computational time step T, the model tries to find a preferred configuration by minimizing W; hence, T acts as a quasi-static time step. To minimize the energy of the model at a given T, we use overdamped dynamics with smaller time steps τ, which restart for each new quasi-static time step T:

γ dx_α/dτ = −∂W/∂x_α = −k Σ_a (δ^a − δ^a_T) δ̂^a.
Here, γ represents the friction coefficient, x_α corresponds to the current position of the vertex α, δ^a is the length of a spring connected to vertex α, and δ̂^a = (x_α − x_β)/δ^a = Δx^a/δ^a represents the unit vector along the spring a that connects vertices α and β; the sum runs over all springs connected to α.
We relax the model at each quasi-static time step T to achieve force balance by updating the positions of the particles using

x_α(τ + dτ) = x_α(τ) − (dτ k/γ) Σ_a (δ^a − δ^a_T) δ̂^a,

where dτ k/γ was set to 0.01 (ensuring no numerical artifacts).
The particles were moved until the average movement of the particles, ⟨‖x_α(τ + dτ) − x_α(τ)‖⟩/R, fell below 10⁻⁹, where R is the radius of curvature of the outer surface of the spherical cap in the initial stress-free state.
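A compact numpy sketch of this relaxation loop (names ours), using the update rule above with dτk/γ = 0.01:

```python
import numpy as np

def relax(x, springs, rest, k_over_gamma_dt=0.01, tol=1e-9, R=77.66):
    """Overdamped relaxation of the spring network to force balance.

    x: (n_vertices, 3) positions; springs: (n_springs, 2) vertex indices;
    rest: (n_springs,) current rest lengths delta_T.
    """
    while True:
        vec = x[springs[:, 1]] - x[springs[:, 0]]   # spring vectors
        length = np.linalg.norm(vec, axis=1)
        unit = vec / length[:, None]
        f = (length - rest)[:, None] * unit          # (delta - delta_T) along spring
        # Accumulate forces on both endpoints (action and reaction).
        force = np.zeros_like(x)
        np.add.at(force, springs[:, 0], f)
        np.add.at(force, springs[:, 1], -f)
        step = k_over_gamma_dt * force
        x = x + step
        # Stop when the average normalized displacement falls below tol.
        if np.mean(np.linalg.norm(step, axis=1)) / R < tol:
            return x
```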
Spontaneous strain tensor
Tissue shape change during development is modelled in this work as the appearance of spontaneous strains, a change in the ground state of local length scales. This notion can be captured with a spontaneous strain tensor field λ(X), a rank-2 tensor. Each component corresponds to the multiplicative factor by which the rest length of the material changes in a particular direction. In some general coordinate system, we can write λ as λ = λ_ij e_i ⊗ e_j. We choose the coordinate system so that it aligns with our desired deformation pattern. In this case, λ(X) has a diagonal representation,

λ = λ_11 e_1 ⊗ e_1 + λ_22 e_2 ⊗ e_2 + λ_33 e_3 ⊗ e_3,

where the basis vectors are chosen such that e_1 and e_2 are surface tangents while e_3 is the surface normal. In general, we keep λ_33 = 1, since we do not input any spontaneous strains along the thickness of the model.
The surface components of λ can be further broken down into isotropic and anisotropic components. Isotropic deformation changes the local area of the surface by changing the local lengths equally in all directions. Anisotropic deformation increases the local length in one direction while decreasing the local length in the other direction so as to preserve the local area. Thus, we decompose the deformation as a product of isotropic and anisotropic contributions, λ_11 = λ λ̃ and λ_22 = λ/λ̃, where λ is the isotropic factor and λ̃ the anisotropic one. Then, the spontaneous deformation tensor can be written as

λ = λ λ̃ e_1 ⊗ e_1 + (λ/λ̃) e_2 ⊗ e_2 + e_3 ⊗ e_3.

Finally, as λ is a field, each of the components in the above equation generally depends on the location on the surface, X.
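A one-function sketch assembling λ from its components in the local basis, following the diagonal representation reconstructed above (the component placement is therefore our assumption):

```python
import numpy as np

def spontaneous_strain(iso, aniso, e1, e2, e3):
    """Assemble the spontaneous strain tensor at one surface point.

    iso, aniso: the scalar components (lambda, lambda-tilde) at this point;
    e1, e2: surface tangent unit vectors; e3: surface normal unit vector.
    """
    return (iso * aniso * np.outer(e1, e1)
            + (iso / aniso) * np.outer(e2, e2)
            + np.outer(e3, e3))
```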
Discretizing λ: As our spring lattice is discrete in nature, we use the following strategy to discretize λ. For a single spring, λ is the average of the values of λ at the two ends of the spring,

λ^a = (λ(X_α) + λ(X_β))/2,

where α and β are the two vertices of the spring a.
Assigning new rest lengths to springs: The initial length of spring a connecting vertices α and β is given by δ^a_0 = ‖X_β − X_α‖. To assign new rest lengths, we use δ^a_F = ‖λ^a ΔX^a‖. Note that we assign a new rest length to any spring a based on the positions of its vertices (X_α and X_β), independent of the layer in which these vertices lie (top or bottom).
Implementing shape change over time: We increase the spontaneous strain slowly to model the slow build-up of stresses due to cell behaviours. Hence, we first calculate the target rest length of each spring (δ^a_F). At each time step, we assign a rest length δ^a_T and minimize the energy of the model. We increase δ^a_T in a simple linear manner,

δ^a_T = δ^a_0 + (T/T_F)(δ^a_F − δ^a_0),

where T_F is the number of quasi-static time steps in which the whole simulation takes place. Note that within each time step, the lattice is brought to a force-balance state. The simulations were performed for different choices of T_F (1, 2, 5), but we found that the differences in output shapes were undetectable. Still, T_F = 5 was chosen to simulate the slow appearance of spontaneous strains.
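Putting the discretization, rest-length assignment, and linear ramp together (all as reconstructed above, so the formulas are assumptions rather than the study's verbatim code), a sketch of the rest-length schedule; each yielded entry would be passed as δ^a_T to the relaxation step sketched in Methods 7.9:

```python
import numpy as np

def program_rest_lengths(X, springs, lam_field, T_F=5):
    """Rest-length schedule implementing the spontaneous strain.

    X: (n_vertices, 3) initial positions; springs: (n_springs, 2) indices;
    lam_field: function mapping a 3D point to a (3, 3) strain tensor.
    Yields the rest lengths delta_T for T = 1..T_F.
    """
    a, b = springs[:, 0], springs[:, 1]
    dX = X[b] - X[a]
    delta_0 = np.linalg.norm(dX, axis=1)
    # Spring-wise tensor: average of the field at the two endpoints.
    lam = np.array([(lam_field(X[i]) + lam_field(X[j])) / 2 for i, j in springs])
    # Target rest length: norm of the strained spring vector.
    delta_F = np.linalg.norm(np.einsum('aij,aj->ai', lam, dX), axis=1)
    for T in range(1, T_F + 1):
        yield delta_0 + (T / T_F) * (delta_F - delta_0)
```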
Measuring resulting strains in model: In our spring model, displacements are defined by the positions of vertices, and we define the deformation gradient tensor F_α at each vertex α of the network.
For each spring a emerging from the vertex α, the deformation gradient tensor should satisfy F_α ΔX^a = Δx^a. However, F_α contains 9 degrees of freedom, while there are 13 springs for each vertex and therefore 13 independent vector equations to be satisfied. Note that six to eight vertices out of about 3220 vertices in the whole network form point defects and thus have 11 springs. Therefore, we define F_α as the tensor that best satisfies the conditions in Eqs. 30 by minimizing the sum of squared residuals,

Σ_a ‖F_α ΔX^a − Δx^a‖².

This is an ordinary least squares (OLS) problem that splits into three independent problems, one per basis vector. We solved this OLS using the Numpy method numpy.linalg.lstsq in cartesian coordinates [53]. We then express F in the coordinate system corresponding to vertex α in the model explained above. From this, we calculate the isotropic (F) and anisotropic (F̃) components in the same way as for λ. Finally, we compute the residual strain as the mismatch between the input spontaneous strain and the achieved strain, λ^res = λ F⁻¹. The isotropic (λ^res) and anisotropic (λ̃^res) components of λ^res are calculated in the same way as for F.
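The per-vertex least-squares problem is small; a sketch with numpy.linalg.lstsq (function name ours):

```python
import numpy as np

def deformation_gradient(dX0, dx):
    """Per-vertex deformation gradient by ordinary least squares.

    dX0: (n_springs, 3) spring vectors at this vertex in the reference state;
    dx: (n_springs, 3) the same spring vectors after relaxation.
    Solves dX0 @ F.T ~= dx, i.e. one least-squares column per basis vector.
    """
    A, *_ = np.linalg.lstsq(dX0, dx, rcond=None)
    return A.T  # F such that F @ dX0[a] ~= dx[a] for every spring a
```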
Nematic director pattern on spherical surface
In the initial state of the model, we specify a coordinate system on the spherical surface in different regions (outDVB and DVB). These coordinate systems are chosen such that the observed nematic patterns of spontaneous strains (λ̃) align with the major axes of the chosen coordinate systems.
We first define the origins in our model similar to the origins defined in the data (Extended Data Fig. S8). To do so, we first measure θ_DV. The coordinates of O_D and O_V are then given by (±R sin(θ_DV/2), 0, R cos(θ_DV/2)) in the cartesian coordinate system. The center of the DVB region is given by the line O_DV, which joins O_D and O_V.
In the outDVB region, we have a coordinate system in which the basis vectors are given by e_r, e_ϕ, and e_h (Extended Data Fig. S8c). e_h is simply the normal vector on the spherical surface. To calculate e_r at a point, we draw a vector from the origin in this region (O_D or O_V) to the point. We then take the projection of this vector onto the tangent plane of the surface and normalize it to give a unit vector. In this way, we calculate e_r as a surface tangent vector emanating radially outwards from the origins of the outDVB regions. e_ϕ is then the direction perpendicular to e_r and e_h. For each point in the outDVB region, we calculate the geodesic distance between the point and the center point of its region. We then normalize this distance by the maximum geodesic distance from the center calculated in this region. This gives us a scalar coordinate r which varies from 0 to 1.
In the DVB region, the basis vectors are given by (e_ρ, e_w, e_h) (Extended Data Fig. S8c). e_h is simply the normal vector on the spherical surface. To calculate e_ρ at a point, we draw a vector from the nearest point on O_DV to the point. We then take the projection of this vector onto the tangent plane of the surface and normalize it to give a unit vector. In this way, we calculate e_ρ as a surface tangent vector emanating outwards from the center line of the DVB region and parallel to the DVB. e_w is perpendicular to e_ρ and e_h. For each point in the DVB region, we calculate the shortest distance between the point and the center line of the DVB region. We then normalize this distance by the maximum distance from the center line in the DVB region. This gives us a scalar coordinate ρ.
For the simple examples presented in Fig. 2b (except Fig. 2b.iii), θ_DV was set to 0 to obtain a simple radial coordinate system. For Fig. 2b.iii, θ_DV > θ_M.

Extracting the strain pattern from segmented images

To quantify the strain due to different cell behaviors along the basis vectors of the chosen coordinate system, we compare cells within topologically tracked bins between two different developmental stages.

Tracking location between developmental stages: We leverage the topological distance coordinate system to track locations between discs. Each topological ring k is given a value N, which denotes the cumulative number of cells from the topological origin defined in each region (O_D, O_V, and O_DV). We use N to track location in our static images of different discs at different developmental stages.

Observed strain due to cell area change: Cell area scales with the square of the distance between cell vertices. Thus, the factor by which the local lengths change in all directions is given by

λ*_A(N) = √(A(N, t + Δt)/A(N, t)).

Here, t corresponds to an initial developmental stage, and t + Δt corresponds to a later developmental stage. A refers to the average cell area evaluated at N.
Observed strain due to cell elongation change: Each cell is given a cell elongation tensor Q that is the average over a further subdivision of the cell polygon into triangles (Methods 7.7). Each triangle can be circumscribed by an ellipse, the centroid of which coincides with the centroid of the triangle.
According to [54], the length of the long axis of the ellipse is given by l = r_o exp(‖Q̃‖), where r_o is the radius of a reference equilateral triangle. The length of the short axis of the ellipse is given by s = r_o exp(−‖Q̃‖). The axes of the ellipse match the radial and tangential directions if the off-diagonal components Q_rϕ or Q_ρϕ are approximately 0, which was the case for our data. The length scale associated with the radial direction is l if Q_rr or Q_ρρ is positive and s if Q_rr or Q_ρρ is negative. Thus, we get a measure of the length scale along the radial direction, which we denote by L and which is given by

L = exp(σ‖Q̃‖), (36)

where σ is the sign of Q_rr or Q_ρρ.
We then average L within each ring and compute the ratio of the length scales along the radial direction between two developmental stages,

λ̃*_Q(N) = L(N, t + Δt)/L(N, t). (37)
Observed strain due to cell rearrangements: Rearrangements lead to anisotropic deformation of the tissue. In our topological coordinate system, radially oriented rearrangements lead to an increase in the number of rings needed to accommodate some fixed number of cells (Extended Data Fig. S6). Similarly, tangential rearrangements lead to a decrease in the number of topological rings. Thus, by measuring the change in the number of rings needed to accommodate some fixed number of cells, we can estimate the deformation due to the net effect of radial and tangential rearrangements.

In a tissue region at developmental stage t, let us consider a single ring with index k and cumulative number of cells N. Ring k contains ∆N cells, given by N(k, t) − N(k − 1, t). By construction, the number of rings needed to contain ∆N cells at location N is n(N, t) = 1. For a later developmental stage, t + ∆t, we estimate n(N, t + ∆t), which is the number of rings that contain ∆N cells at the location N. This is done by taking the difference between k values evaluated at t + ∆t and at locations N(k − 1, t) and N(k, t) (see also Extended Data Fig. S6):

n(N, t + ∆t) = k(N(k, t), t + ∆t) − k(N(k − 1, t), t + ∆t). (38)

As n(N, t) and n(N, t + ∆t) are measures of the number of topological rings, they represent the radial topological length scales that change due to cell rearrangements. Thus, the strain due to cell rearrangements is quantified by

λ̃*_R(N) = n(N, t + ∆t)/n(N, t). (39)

λ̃*_R(N) > 1 represents radial extension of the tissue due to radially oriented rearrangements, while λ̃*_R(N) < 1 represents tangential extension.
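A sketch of this ring-counting estimate (names ours), interpolating k(N) at the later stage as in Eq. 38:

```python
import numpy as np

def lambda_star_R(k_t, N_t, k_later, N_later, i):
    """Observed strain from rearrangements at the ring with index i.

    (k_t, N_t): ring indices and cumulative cell counts per ring at stage t;
    (k_later, N_later): the same arrays at stage t + dt; requires i >= 1.
    Implements Eq. 38 with n(N, t) = 1 by construction.
    """
    k_at = lambda N: np.interp(N, N_later, k_later)  # k(N) at t + dt
    n_later = k_at(N_t[i]) - k_at(N_t[i - 1])  # rings now holding the same cells
    return n_later  # equals n(N, t + dt) / n(N, t), since n(N, t) = 1
```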
Observed strain due to the combination of cell elongation change and cell rearrangements: The combined strain due to cell elongation change and cell rearrangements is given by

λ̃*(N) = λ̃*_Q(N) λ̃*_R(N). (40)
Data availability
Imaging and model data are available upon request without restriction.
Code availability
Uncited code is available upon request without restriction.
Fig. 1 The wing pouch undergoes anisotropic curvature changes during eversion: a, Schematic cross-sections along the long axis of the wing disc before and after eversion. Before eversion, the wing disc resembles an epithelial sac with apical facing inwards. The tissue consists of the Disc Proper (DP), which is a folded, thick, pseudo-stratified monolayer, and the Peripodial Membrane (PM), a thin squamous monolayer. After eversion, the PM is removed and the former pouch region of the DP forms the wing bilayer with apical facing out and dorsal and ventral on opposing sides. b, Example of a 3D segmentation of the DP in a head-on and side view before eversion (left, wL3) and after bilayer formation (right, 2hAPF). Pouch: blue; across-DVB: solid line; along-DVB: dashed line. c,d, Representative images for stages of eversion. Wing discs are labelled with Ecadherin-GFP. The pouch region is highlighted, colored by time. c, Projection view showing the dorsal side for early pupal stages and dorsal (down), ventral (up), and DVB for wL3. The position of the DVB is indicated with a dashed line. d, Across-DVB cross-section. The position of the DVB is indicated in white. The asterisk shows the rupture point of the PM, which is removed around 4hAPF. A minimum of 5 wing discs was analyzed for each time point; hAPF = hours after puparium formation; wL3 = wandering larval stage, 3rd instar; scale bars = 100 µm.
Fig. 3 Topological tracking reveals spatial patterns of cell size changes, cell elongation changes, and cell rearrangements in the everting wing disc pouch: a,b,d, Cell measurements highlighted on the surface of representative examples of everting wing discs over time. At wL3, the full pouch is visible, whereas only the dorsal side is shown here for early pupal stages. a, Cells are colored by apical cell area. b, Bars highlight the orientation of locally averaged cell elongation Q (projected onto 2D), and color indicates the elongation magnitude |Q| averaged over patches of size 350 µm². c, Segmentation of a wL3 pouch; the origins for the topological coordinates k (O_D and O_V in the outDVB region and a center line O_DV for the DVB) are highlighted in dark blue. The arrows indicate the direction of the spatial coordinates that result from these origins, traversing along the DVB and radially for the outDVB. The inset shows the center region of the same wing pouch, where each cell up to k = 5 is colored by k. The origin cells are at k = 0 (dark blue). Note that the origin is a single cell for each side of the outDVB and a line of cells for the DVB. Due to the 3D nature of the wing pouch, the topological coordinate system is defined in one view for larval stages and in 4 separate imaging angles for early pupal stages (see schematic, right). d, The maximum k depends on the size of the segmented region (upper row). For the topologically tracked region, the maximum k may change due to rearrangements and is denoted k(N_ROI) (lower row). e-f, Cell area (e) or cell elongation (f) spatially averaged over k (minimum five wing discs per stage). Dorsal and ventral are averaged together into 'outDVB'. Geometric representations (top) show the outDVB as half-circles and the DVB as a central rectangular box. e, Geometric representations highlight cell area gradients (A/⟨A⟩) within each stage. Lower panels show the cell area (A) as a function of k for all time points. f, The component of cell elongation (Q_rr for cells in the outDVB and Q_ρρ for cells in the DVB) is calculated relative to the origin for each cell of the respective region. This makes Q_rr the radial component of cell elongation, whereas Q_ρρ is effectively the cell elongation along the DVB (cartoon insets on the lower panel, see also Methods 7.8). Q_rr and Q_ρρ are calculated as a function of k. In the upper panel, magnitudes are represented by color. g, Schematic (top) showing how we estimate cell rearrangements using topology. Each circle represents a cell in the outDVB region of the wing disc, colored by topological distance at the initial time point. If the number of cells per k decreases, the deformation by rearrangements is radial. Plots (bottom) show the number of cells in the wing disc pouch N contained within k. The horizontal line shows N_ROI for the wL3 stage; the vertical lines show the corresponding k(N_ROI) for each stage. In e-g, solid lines indicate the mean, and ribbons show the 95% confidence interval of the mean.
The contribution to the anisotropic component of λ* from changes in cell elongations, λ̃*_Q, is small compared to the contribution by cell rearrangements, λ̃*_R (Fig. 4b, Extended Data Fig. S7a,b,d, Methods 7.11). While λ̃*_Q is tangential, following a shallow gradient, λ̃*_R is radial and increases with the distance from the origin in the outDVB and decreases in the DVB. We next use the programmable spring model to test how the observed in-plane cellular behaviors can cause tissue shape changes. We define the DVB and outDVB regions in the model, matching their relative sizes in the wing pouch (Fig. 4c, Extended Data Fig. S8). For each individual cell behavior and measured time point (wL3, 0hAPF, 2hAPF, and 4hAPF), we use the in-plane strain λ* that we infer from each observed class of cell behaviors as examples of spontaneous strain λ. We use these to program the spring lengths in the model. For each insertion of spontaneous strain (model time points: initial, t1, t2, final, corresponding to the experimental time points), we relax the spring network quasi-statically to a force-balanced state (Methods 7.9, 7.10). As the effective bending modulus of the wing disc is experimentally inaccessible, we fit the thickness of the model in an example scenario where all observed cell behaviors are input as spontaneous strains, and use the same thickness thereafter (Extended Data Fig. S9a, Methods 7.14).
Fig. 4 Inputting measured and inferred strains as spontaneous strain in the programmable spring model shows that active rearrangements and cell area changes drive pouch morphogenesis: a, Overlay of a wL3 (white) and a 4hAPF (cyan) wing pouch (left) and plots of the average change in tissue curvature in the topologically tracked region in the across-DVB (middle) and along-DVB (right) directions. b, Observed strain from cellular behaviors between time points wL3 to 4hAPF as a function of normalized distance from the origin, r and ρ. Observed strains arise from (left to right): rearrangements (λ̃*_R), cell elongation changes (λ̃*_Q), and cell area changes (λ*_A). Half circles indicate the outDVB region; the rectangular box indicates the DVB. The color represents the magnitude of the different strains; the bars visualize the direction of observed strain for λ̃*_R and λ̃*_Q. c, The model coordinates are designed to match the geometry of spatial patterns in the wing disc pouch (see also Fig. 3c). d,f,h, Observed in-plane strain from rearrangements (d, λ̃*_R), cell elongation changes (f, λ̃*_Q), and cell area changes (h, λ*_A) are inserted in the model as spontaneous strains by a change in rest length of the springs (δ_0/δ*). To compare the initial and final stages (corresponding to wL3 to 4hAPF), the model cross-section shows the shape in the across-DVB direction. The change in curvature of the model outcomes is plotted for all time points (right) in the across-DVB and along-DVB directions. The initial shape is a spherical cap with a radius resembling the wL3 stage. t1, t2, and final stages are the model results from the change in strains by 0, 2, and 4hAPF. λ contains observed strains from the individual measured cell behaviors, while the other components (λ or λ̃) are set to 1. e.i,g.i,i.i, Input spontaneous strain for λ̃*_R (e), λ̃*_Q (g), and λ*_A (i) at the final eversion time point (λ, λ̃) and the resulting strain achieved after relaxation of the model, which can be isotropic (F) and anisotropic (F̃). e.ii,g.ii,i.ii, Residual strain that remains at the final time point. The colors show the magnitude of strain using the same range as indicated in Fig. 4b. Plots are split vertically to show the isotropic component (λ) on the left and the anisotropic component (λ̃) on the right. j,k, Model output and residual strains for the inferred spontaneous strains, following the same procedure as in d-i.
Fig. 5 MyoVI RNAi alters active cell behaviors and results in a tissue shape phenotype: a, MyoVI RNAi phenotype during eversion (scale bars = 100 µm). Representative across-DVB cross-sections (a.i) and comparison of apical shape between MyoVI RNAi and control (a.ii). b, Overlay of a wL3 (white) and a 4hAPF (cyan) MyoVI RNAi wing pouch (left) and plots of the average change in tissue curvature in the topologically tracked region for the across-DVB (middle) and along-DVB (right) directions. c, Observed strain from cellular behaviors in MyoVI RNAi wing discs between time points wL3 to 4hAPF. Plots are split vertically, with the observed strains for wild type (WT) for comparison on the left and MyoVI RNAi on the right. Measured strains come from λ*R, λ*Q, and λ*A. Quarter circles indicate the outDVB region, and the rectangular box indicates the DVB. The color represents the magnitude of the different strains; the bars indicate the direction of observed strain for λ*R and λ*Q. d,f, Observed in-plane behaviors are inserted in the model as spontaneous strains by a change in rest lengths of the springs (δo/δ*). The initial stage is a spherical cap with the radius taken to resemble the shape at the wild-type wL3 stage. t1, t2, and final stages are the model results after a change in spring rest length according to observed strains from 0, 2, and 4hAPF for MyoVI RNAi. λ contains observed strains from rearrangements (d) or
7.7 Measurement of cell area and cell elongation tensor

For the simple examples presented in Fig. 2c (except Fig. 2c.iii), θDV was set to 0 to give a simple radial coordinate system. For Fig. 2c.iii, θDV > θM.

7.12 Extracting the strain pattern from segmented images

To quantify the strain due to different cell behaviors along the basis vectors of the chosen coordinate system, we compare cells within topologically tracked bins between two different developmental stages. Tracking location between developmental stages: We leverage the topological distance coordinate system to track locations between discs. Each topological ring k is given a value N which denotes the cumulative number of cells from the topological origin defined in each region (OD, OV, and ODV). We use N to track the location in our static images of different discs at different developmental stages. Observed strain due to cell area change: Cell area scales with the square of the distance between cell vertices. Thus, the factor by which the local lengths change in all directions is given by λ*A(N) = √(A(N, t + Δt) / A(N, t)). | 16,578.8 | 2024-01-24T00:00:00.000 | [
"Materials Science",
"Biology",
"Physics"
] |
Direct Measurements of Local Coupling between Myosin Molecules Are Consistent with a Model of Muscle Activation
Muscle contracts due to ATP-dependent interactions of myosin motors with thin filaments composed of the proteins actin, troponin, and tropomyosin. Contraction is initiated when calcium binds to troponin, which changes conformation and displaces tropomyosin, a filamentous protein that wraps around the actin filament, thereby exposing myosin binding sites on actin. Myosin motors interact with each other indirectly via tropomyosin, since myosin binding to actin locally displaces tropomyosin and thereby facilitates binding of nearby myosin. Defining and modeling this local coupling between myosin motors is an open problem in muscle modeling and, more broadly, a requirement for understanding the connection between muscle contraction at the molecular and macro scale. It is challenging to directly observe this coupling, and such measurements have only recently been made. Analysis of these data suggests that two myosin heads are required to activate the thin filament. This result contrasts with a theoretical model, which reproduces several indirect measurements of coupling between myosin, that assumes a single myosin head can activate the thin filament. To understand this apparent discrepancy, we incorporated the model into stochastic simulations of the experiments, which generated simulated data that were then analyzed identically to the experimental measurements. By varying a single parameter, good agreement between simulation and experiment was established. The conclusion that two myosin molecules are required to activate the thin filament arises from an assumption, made during data analysis, that the intensity of the fluorescent tags attached to myosin varies depending on experimental condition. We provide an alternative explanation that reconciles theory and experiment without assuming that the intensity of the fluorescent tags varies.
Parameter estimation
In order to model the measurements of Desai et al. [4], we had to estimate four parameters: σ N the standard deviation of the assumed Gaussian background noise; σ F the standard deviation of the assumed Gaussian temporal fluctuation in GFP intensity; e the fluorescent emission of a GFP; and ε which determines the degree of regulation. All parameters, except ε, were estimated prior to fitting the data with the model. We varied ε in order to optimize the fit of the model to the data.
We estimated the background noise, σ N , individually for each experimental condition. The values used in the model are listed in Table 1 of the main text. To estimate σ N from a kymograph, we first removed spatial fluctuations in intensity. To do so, we calculated the minimum fluorescence at every pixel along the actin filament. We then fit a fifth-order polynomial to this minimum fluorescence as a function of position along actin. The resulting curve defined zero fluorescence for every pixel along actin.
We then plotted a histogram of the fluorescence of every pixel in a kymograph. The data show a peak, which corresponds to the background noise, and an extended tail toward positive fluorescence, which corresponds to the binding of fluorescent myosin (GFP-S1). We estimated the standard deviation of the histogram by matching a Gaussian to the distribution to the left of the mean, thereby only fitting the background noise (see Fig. 1A).
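A hedged sketch of this two-step estimate follows: a polynomial baseline is fit to the per-pixel minima, and a Gaussian is then matched to the noise-only (left-of-mean) side of the pixel histogram. The synthetic kymograph and all numbers are stand-ins for real data.

```python
# Illustrative sigma_N estimate: subtract a 5th-order polynomial baseline fit
# to the per-pixel minimum of the kymograph, then estimate the noise width
# from the left side of the pixel-intensity histogram.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_frames = 512, 1000
baseline = 50 + 10 * np.sin(np.linspace(0, 3, n_pix))     # spatial variation
kymo = baseline[:, None] + rng.normal(0, 9.5, (n_pix, n_frames))
kymo[100:110, 200:400] += 45                               # a binding event

# Zero fluorescence per pixel: polynomial through the per-pixel minima.
x = np.arange(n_pix)
coeffs = np.polyfit(x, kymo.min(axis=1), deg=5)
zero = np.polyval(coeffs, x)
flat = kymo - zero[:, None]

# Estimate sigma_N from the left (noise-only) side of the distribution.
vals = flat.ravel()
peak = np.median(vals)                      # proxy for the noise peak
left = vals[vals <= peak]
sigma_N = np.sqrt(np.mean((left - peak) ** 2))  # RMS of the mirrored half
print(f"estimated sigma_N ~ {sigma_N:.1f}")
```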
To estimate σF and e, we analyzed a histogram, which plots the frequency of spots of a given intensity, measured under conditions where we expected to see primarily single molecules binding (pCa 6, [ATP] = 0.1 µM, [Myo] = 1 nM). The maximum of this histogram's peak was around I1 = e/f = 45 intensity units (Fig. 1B). Given that these data were collected at a frame rate of f = 10 Hz, this gives a value of e = f·I1 = 450 s−1.
Since the histogram is constructed by fitting the raw data with a Gaussian, we expect that the contribution of σN to the signal is small. One might then expect the histogram to be Gaussian with standard deviation σF, but the histogram is not exactly Gaussian. This is likely due to imperfections in the fitting algorithm used to construct the histogram. However, the histogram is approximately Gaussian near the peak, so we estimated σF = 0.22 by matching a Gaussian to this peak. In support of the view that the non-Gaussian shape of the histogram is due to the fitting algorithm, and that the width of the peak can reasonably be used to estimate σF, simulations with only single binding events generate histograms that are similar to observations (Fig. 1B). Figure 1: The spread in the histogram near the peak is due to temporal fluctuations in intensity, and is well fit by a Gaussian with standard deviation σF = 0.22 (blue), in units of scaled intensity. Deviations from the Gaussian are likely due to the algorithm used to determine the fluorescent intensity of a spot, since simulations with only single binding events generate histograms that are similar to observations (gray).
To estimate ε, we fit the data [4]. A series of preliminary simulations suggested that ε ≈ 0.06 fit the data at pCa 6 the best. To quantify this, we performed simulations of the experiments at variable [Myo] (pCa 6, ATP = 0.1 µM, [Myo] = 1, 5, 10, 15 nM, 1000 frames collected at 10 Hz), with ε = 0.04, 0.06, and 0.08. We compared both the distributions of clusters (Fig. 2A) and the mean fluorescence per pixel (Fig. 2B). In all cases, ε = 0.06 was the best (Fig. 2C). We therefore used this value for pCa 6. Since there was only a single measurement at pCa 5 and pCa 7, we did not perform as detailed an analysis, but rather estimated ε = 0.01 and ε = 0.4, respectively, from preliminary simulations.
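Schematically, this ε selection is a small grid search that scores simulated summary statistics against the measurements. In the hypothetical skeleton below, simulate_mean_fluorescence is a toy stand-in for the stochastic model and the "measured" values are placeholders.

```python
# Grid-search skeleton for epsilon: simulate each candidate, score against
# measured summary statistics by least squares, keep the best.
import numpy as np

myo_nM = np.array([1, 5, 10, 15])
measured = np.array([0.05, 0.4, 1.1, 1.9])      # placeholder data

def simulate_mean_fluorescence(eps, myo, rng):
    # Hypothetical toy response; the real model is the stochastic simulation
    # of myosin binding described in the main text.
    return eps * myo * (1 + 0.05 * rng.standard_normal(myo.shape)) / 0.5

rng = np.random.default_rng(1)
scores = {}
for eps in (0.04, 0.06, 0.08):
    sims = np.array([simulate_mean_fluorescence(eps, myo_nM, rng)
                     for _ in range(20)])       # replicate simulations
    scores[eps] = np.mean((sims.mean(axis=0) - measured) ** 2)
best = min(scores, key=scores.get)
print(scores, "-> best epsilon:", best)
```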
Average fluorescence
In the main text, the comparison between model and data is presented qualitatively in Figure 4A-C, where simulated and measured kymographs are displayed side-by-side. To obtain a more quantitative comparison, we determined the average fluorescence per pixel in each kymograph. In the main text, we present only measurements at variable [Myo] (Fig. 4D). All measurements are shown in Fig. 3. The agreement between simulation and measurement is good.
3 More details of constant vs. variable GFP emission

3.1 Desai et al.'s analysis
The model of local coupling between myosin molecules [1,2,3] assumes that a single myosin can activate the thin filament; Desai et al. [4] conclude that two myosin heads are required to activate the thin filament. Desai et al.'s [4] conclusion follows from their analysis of the histograms they measured (see Fig. 2 of the main text for the construction of these histograms, and Fig. 5 of the main text for the histograms themselves). We now summarize this analysis.

Figure 3: Plots of average pixel fluorescence for kymographs measured under various conditions. The left plot, at variable myosin, can be found in the main text as Fig. 4D. The other two plots show variable ATP and variable calcium. The model (solid dots, SD error) reasonably agrees with the data from Desai et al. [4] (hollow dots). Note that the lowest ATP measurement (middle plot) was simulated at [ATP] = 0.15 µM, as discussed in the main text. For each pixel, zero fluorescence is defined as the minimum value it achieves during the kymograph. In each plot, the apparent background fluorescence is indicated as a dashed line. Fluorescence is measured in scaled intensity, defined in the text, which is non-zero in the absence of a myosin; a single myosin increases the signal by 1 unit.
Each measured histogram contains the fluorescent intensity of every spot in a 500 or 1000 frame movie. Each fluorescent spot comes from a cluster of GFP-tagged myosin molecules (GFP-S1s). Given that

1. each GFP-S1 has an average fluorescent intensity I1,
2. a cluster of i GFP-S1s has a measured intensity of iI1 ± σS, where σS represents signal noise, and
3. signal noise is Gaussian and its magnitude is independent of the number of GFP-S1s in the cluster,

then each histogram F(I) arises from the following sum:

F(I) = Σ_i a_i G(I − iI1; σS),

where G(·; σS) is a Gaussian of standard deviation σS. The coefficients a_i determine the relative frequency of clusters of i molecules in the histogram. Thus, for example, if a histogram consists only of clusters of single GFP-S1s, then a1 = 1 and a2 = a3 = a4 = · · · = 0. Desai et al. [4] determined the coefficients a_i by first measuring σN for an isolated GFP and then fitting their histogram with the equation

F(I) = Σ_i a_i G(I − I_i; σN),

allowing the a_i and I_i's to vary in order to optimize the fit (Fig. 4A). Each time they performed this fit, they found that the optimal I_i values occurred at regularly spaced intervals, consistent with Eq. 1, where I_i = iI1. Thus, one might expect that the lowest I_i is I1, corresponding to a cluster of one GFP-S1, the next 2I1, corresponding to a cluster of two GFP-S1s, and so on. If so, then if the I_i's are sorted from small to large and plotted as a function of apparent cluster size (i.e. 1 for the first, 2 for the second, and so on), the resulting curve should be linear with slope I1 and should extrapolate to 0 at a cluster size of 0. Although linear curves were always observed, frequently the line did not extrapolate to 0 (Fig. 4B, black curve). Instead, the curve extrapolated to 0 only upon assuming that the first cluster contained two GFP-S1s, the second three, and so on (Fig. 4B, red curve). Based on this observation, Desai et al. [4] conclude that single GFP-S1s cannot bind to the thin filament; rather, two or more GFP-S1s are required to activate the thin filament.
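Under assumptions 1-3, the histogram model can be evaluated directly. The short sketch below uses illustrative parameter values (I1 = 45, σS = 15) taken from the surrounding text.

```python
# The histogram model implied by assumptions 1-3: a sum of Gaussians centred
# at integer multiples of the single-GFP intensity I1, weighted by the
# cluster frequencies a_i.
import numpy as np

def histogram_model(I, a, I1=45.0, sigma_S=15.0):
    """F(I) = sum_i a_i * exp(-(I - i*I1)^2 / (2 sigma_S^2)), i = 1..len(a)."""
    I = np.asarray(I, dtype=float)
    F = np.zeros_like(I)
    for i, ai in enumerate(a, start=1):
        F += ai * np.exp(-(I - i * I1) ** 2 / (2 * sigma_S ** 2))
    return F

I = np.linspace(0, 300, 601)
F = histogram_model(I, a=[1.0, 0.4, 0.1])   # mostly single molecules
print(I[np.argmax(F)])                      # peak near I1 = 45
```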
Analysis of simulated data
Given that the model, which assumes a single GFP-S1 can activate the thin filament, successfully reproduces the measurements, one might guess that the measured histograms do not necessarily imply that two or more GFP-S1s are required to activate the thin filament. However, there are subtle differences between the measured and simulated histograms. It is therefore possible that the analysis of Desai et al. [4] is sensitive to these subtle differences, so that the simulated data, when analyzed, will differ from the analysis of the measurements. We therefore performed Desai et al.'s [4] analysis on the simulated measurements with variable [Myo] (Fig. 4). We used a slightly different fitting algorithm than Desai et al. [4]. Our algorithm is as follows:

1. Use linear interpolation to represent a given histogram with 1000 equally spaced points between 0 and 600.
2. Determine the intensity (Imax) at which the maximum frequency (Fmax) occurs.

3. Use a non-linear optimization algorithm (Matlab's fminsearch function), starting from an initial guess of mean Imax and amplitude Fmax, to fit the histogram with a single Gaussian with standard deviation σN, allowing its mean and amplitude to vary to minimize the least-squares error between the Gaussian and the interpolated histogram.

4. Determine the intensity (IE) at which the measurement and fit have a maximum difference (FE).

5. Use a non-linear optimization algorithm (Matlab's fminsearch function), starting from an initial guess of the previous best fit with one additional Gaussian, with standard deviation σN, of mean IE and amplitude FE.

6. Repeat steps 4 and 5 until no improvement occurs, and/or two Gaussians with similar means are observed.

Figure 4: A. In the first step of the analysis, a histogram is fit with Gaussians of fixed standard deviation. Here, a simulated histogram (pCa 6, ATP = 0.1 µM, [Myo] = 15 nM, 1000 frames collected at 10 Hz) is fit with eight Gaussians, each of standard deviation σ = 15. B. In the second step of the analysis, the Gaussians are ordered, numbered sequentially, and plotted. In all cases, a linear curve results. Sometimes that linear curve does not pass through the origin (black), but when the first point is assigned to two molecules, the linear curve passes through the origin (red). Note: the starred point is considered an outlier and is not included in the linear fit. C. In the third step of the analysis, the amplitudes of the Gaussians that correspond to each number of binders are plotted. We wrote our own algorithm for this analysis, which differs slightly from Desai et al.'s [4] algorithm. We used our algorithm to analyze measured histograms (blue) and simulated histograms (gray, individual; black, average; yellow, range) collected at 10 Hz, variable myosin, pCa 6, ATP = 0.1 µM, 1000 frames. Analysis of the measurements and simulations with our algorithm yields similar results to those reported in Desai et al. [4] (blue squares).
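A sketch of steps 1-6 in Python, using scipy's Nelder-Mead simplex as the analogue of Matlab's fminsearch; this is an illustration under the stated steps, not the authors' code.

```python
# Greedy Gaussian-adding fit: Gaussians of fixed width sigma_N are added
# where the residual is largest, until the fit no longer improves.
import numpy as np
from scipy.optimize import minimize

def fit_histogram(I, F, sigma_N, max_components=10, rel_tol=1e-3):
    def model(params):
        out = np.zeros_like(I, dtype=float)
        for m, a in zip(params[0::2], params[1::2]):
            out += a * np.exp(-(I - m) ** 2 / (2 * sigma_N ** 2))
        return out

    def sse(params):
        return np.sum((F - model(params)) ** 2)

    # Steps 2-3: start from the global maximum of the histogram.
    best = minimize(sse, [I[np.argmax(F)], F.max()], method="Nelder-Mead").x
    err = sse(best)
    while len(best) // 2 < max_components:
        resid = F - model(best)                 # step 4: largest residual
        guess = np.concatenate([best, [I[np.argmax(resid)], resid.max()]])
        trial = minimize(sse, guess, method="Nelder-Mead").x  # step 5
        if sse(trial) > (1 - rel_tol) * err:    # step 6: stop if no gain
            break
        best, err = trial, sse(trial)
    return best.reshape(-1, 2)                  # rows of (mean, amplitude)
```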
To validate this algorithm, we analyzed the measured histograms at variable [Myo], allowing σN to vary in order to optimize the fit to Desai et al.'s [4] analysis of these histograms. The results, shown in Fig. 4C with Desai et al.'s [4] analysis as blue squares and our fit as blue lines, are in reasonable agreement. Further, the best-fit values of σN = 9, 12, 17, and 15 at [Myo] = 1, 5, 10, and 15 nM, respectively, are in reasonable agreement with our estimates (see Parameter Estimation in the Supplementary Material) of σN = 9.5, 11.25, 13.5, and 16.2 at [Myo] = 1, 5, 10, and 15 nM, respectively.
When we used this algorithm to analyze the simulated data, the results are similar to those from the measurements (Fig. 4C). Further, many of the fits exhibited the same non-zero extrapolation reported in Desai et al. [4] (e.g., Fig. 4B is from a simulation at [Myo] = 15 nM). While these results are the basis for the conclusion that two or more myosin molecules are required to activate the thin filament, these simulations assume that a single myosin can activate the thin filament. | 3,248 | 2015-11-01T00:00:00.000 | [
"Biology",
"Physics"
] |
Optimization of Bifunctional Antisense Oligonucleotides for Regulation of Mutually Exclusive Alternative Splicing of PKM Gene
Oligonucleotide tools, as modulators of alternative splicing, have been extensively studied, giving rise to new therapeutic approaches. In this article, we report detailed research on the optimization of bifunctional antisense oligonucleotides (BASOs), which are targeted towards interactions with the hnRNP A1 protein. We performed a binding screening assay, Kd determination, and UV melting experiments to select sequences that can be used as a high-potency binding platform for hnRNP A1. Newly designed BASOs were applied to regulate the mutually exclusive alternative splicing of the PKM gene. Our studies demonstrate that at least three repetitions of the regulatory sequence are necessary to increase expression of the PKM1 isoform. On the other hand, PKM2 expression can be inhibited by a lower number of regulatory sequences. Importantly, a novel branched type of BASO was developed, which significantly increased the efficiency of splicing modulation. Herein, we provide new insights into BASO design and show, for the first time, the possibility of regulating mutually exclusive alternative splicing via BASOs.
Introduction
The process of alternative splicing allows for the synthesis of many protein isoforms from one gene; therefore, a relatively small number of genes can produce a large number of different proteins. It is estimated that about 94% of genes with at least two exons undergo alternative splicing, increasing the diversity of proteins [1]. In addition, the formation of different isoforms is tissue-specific and is associated with various stages of organism development. The regulation of alternative splicing is a complex process and requires special sequences (cis-acting elements), to which the splicing factors (trans-acting elements) bind. These sequences may enhance or silence splicing and can be located in exons or introns. Due to these features, sequences regulating alternative splicing can be divided into exonic splicing enhancers (ESE), intronic splicing enhancers (ISE), exonic splicing silencers (ESS), and intronic splicing silencers (ISS). Moreover, the pre-mRNA secondary structure also has a significant influence on alternative splicing regulation [2]. Due to the complicated regulation of the alternative splicing process, even mutations in noncoding regions may radically change the pattern of alternative splicing. The latest literature reports show an increasing number of diseases related to alternative splicing disorders [3][4][5]. New experimental methods have allowed for a broad analysis of the human genome. The results of these studies show large changes in alternative splicing that occur during tumorigenesis [6]. The alterations in alternative splicing induce changes in metabolism, apoptosis, control of the cell cycle, invasion, metastasis, and angiogenesis of tumor cells.
In the oligonucleotide-based technology of alternative splicing regulation, we can distinguish approaches grounded on splice-switching oligonucleotides (SSOs) and bifunctional antisense oligonucleotides (BASOs). SSOs are complementary to the sequences as a low-activity dimer, leading to the accumulation of metabolic intermediates that are necessary for cell proliferation [18]. Moreover, PKM2 helps cells accommodate to nutrient-limited conditions by increasing glucose uptake and lactate production. Additionally, PKM2 functions as a transcription cofactor. It was found that PKM2 can interact with various transcription factors and has an influence on the migration and invasion of cancer cells [19,20]. Knowledge about the correlation between PKM2 functions and cancer development is still under investigation. However, it is already known that PKM2 can be an attractive drug target.
Our studies show, for the first time, the possibility of regulating the alternative splicing of the PKM gene by bifunctional antisense oligonucleotides. In order to achieve this aim, we present the comprehensive optimization of the silencing regulatory part of BASOs. In the presented manuscript, all RNA sequences with which the hnRNP A1 protein potentially interacts have been examined in detail. A broad range of sequences has been used to find the best ones that can be applied as a binding platform for hnRNP A1. We also performed binding affinity studies and UV melting experiments to preliminarily select the full-length regulatory sequences that were used for the construction of BASOs. Herein, we reveal two regulatory sequences, for which we optimized the number of repeats most effective in alternative splicing regulation. Importantly, the impact of the novel BASOs on the endogenous level of the PKM isoforms in the HeLa cell line has been assessed.
Oligonucleotides Screening and Binding Affinity Determination
The hnRNP A1 protein contains two RNA recognition motifs (RRMs), which are responsible for interactions with RNA. Different RNA sequences were proposed as specific for each individual RRM and full-length hnRNP A1 [21][22][23].
Herein, we screened 41 oligonucleotides, designed based on already published data, to find sequences that can be used as a binding platform of the highest potency for interactions with the hnRNP A1 protein (Supplementary Material, Table S1) [22][23][24]. Almost every sequence contains the core motif (5′AG3′), which seems to be necessary for interaction with hnRNP A1 [21,25]. The flanking positions can be occupied by different types of nucleotides. We aimed to assess a wide range of oligonucleotides by introducing all possible nucleotide residues in these variable positions. The screening was performed using an electrophoretic mobility shift assay (EMSA). The protein was used in high excess relative to the oligonucleotide concentration (Figure 1A). Polyacrylamide gel analysis revealed 19 sequences that interact with hnRNP A1 (Supplementary Material, Table S1). The oligonucleotides were FAM-labeled, which allowed for visualization of the formed complex and assessment of the oligonucleotide amount involved in complex formation. Based on fluorescence intensity, we chose four oligonucleotides that were bound by the protein to the greatest extent: 5′CAGGUAAGU3′ (sequence A), 5′CAGGUGAGU3′ (sequence B), 5′UAGGA3′ (sequence C), and 5′UAGGU3′ (sequence D). Importantly, all of these sequences contain the common 5′AGG3′ motif, which was suggested by Beusch et al. as an optimal recognition motif for the RRM1 of hnRNP A1 (5′(U/C)AGG3′) [21]. Moreover, the sequences of oligonucleotides A and B simulate a consensus 5′ splice site. The screening confirmed the already published data indicating that hnRNP A1 is able to bind the 5′ss sequence [22,26]. Even though all of the selected oligonucleotides formed complexes with the hnRNP A1 protein, we decided to perform binding affinity determination to verify whether hnRNP A1 has a stronger affinity to some of them. The dissociation constant (Kd) values could help to analyze the dependence between binding affinity and regulatory sequence efficiency in splicing regulation. Usually, a few repetitions of a regulatory sequence are used for the protein-recruiting function of bifunctional antisense oligonucleotides. To date, a maximum of three repetitions of the hnRNP A1 binding motif has been used to regulate alternative splicing [12]. However, it should be emphasized that the drawback of longer oligonucleotides is their ability to form secondary structures, which can have an influence on binding with the target protein. Therefore, we synthesized oligonucleotides with different repetitions of regulatory sequences and determined their binding affinity to hnRNP A1 (Table 1). The 9-nt long sequences (A, B) were repeated two (A2, B2) and three times (A3, B3), whereas the 5-nt long sequences (C, D) were repeated two (C2, D2) and four times (C4, D4) (Figure 1B). The intensity of the formed complexes and free oligonucleotides was assessed to derive the Kd values. The results obtained from the EMSA experiments are presented in Figure 1C. The length of oligonucleotides in group A does not have an influence on binding affinity, since the two-times and three-times repeated sequences possess a comparable Kd value of around 1.70 µM. In contrast, the Kd value in group B slightly increases with the number of sequence repetitions, from 1.76 µM for two repetitions to 1.99 µM for three repetitions. Oligonucleotides from groups C and D possess the strongest affinity to the protein, which increases with the number of repetitions of the basic sequence.
Kd values change from 1.44 µM to 0.76 µM and from 1.20 µM to 0.77 µM for groups C and D, respectively. There is a slight difference in the Kd value between oligonucleotides with two repetitions of the basic sequences (C2 with sequence 5′UAGGA3′ vs. D2 with sequence 5′UAGGU3′). On the other hand, oligonucleotides with four repetitions of these sequences possess the same binding affinity (C4 and D4). To assess whether the interaction of oligonucleotides with hnRNP A1 is only sequence-dependent or is also structure-dependent, we performed UV melting experiments at 260 nm and 295 nm wavelengths, which allowed us to monitor the formation of Watson-Crick and Hoogsteen H-bond-based structures, respectively. UV melting experiments showed that five out of eight oligonucleotides are able to form nucleic acid structures. Significantly, two types of structures were observed, i.e., duplex/hairpin (sequences B2 and B3) (Supplementary Material, Figure S1A) and G-quadruplex (sequences D2, C4, and D4) (Supplementary Material, Figure S1B), providing evidence that hnRNP A1 interacts with G-quadruplexes as well as with single-stranded and double-stranded oligonucleotides. Despite the fact that the difference in Kd values is rather minor, the observed correlation between thermodynamic studies and Kd values might suggest that the presence of a G-quadruplex increases interactions with the protein, in reference to single-stranded oligonucleotides (ssC2 vs. G4-forming D2). Interestingly, the structural features of G-quadruplexes might also influence the interactions with hnRNP A1. The sequences of oligonucleotides C4 and D4, which bind to the protein with the same binding affinity (Kd = 0.76 and 0.77 µM, respectively), most probably prompt the formation of intramolecular G-quadruplex structures. On the contrary, the sequence of D2 allows only for the formation of an intermolecular G-quadruplex (Kd = 1.20 µM). Based on the D2, C4, and D4 sequences, all three oligonucleotides form G-quadruplexes containing a similar core with two G-tetrads; however, the different molecularity of folding might induce the presence of various 3-nt loop types. Thus, the preferential binding of hnRNP A1 to C4 and D4 might be due to the intramolecular character of the G-quadruplex structure and/or a different loop type. Our studies are in accordance with the observations published previously by Liu et al., who reported structure-dependent and sequence-independent interactions between hnRNP A1 and telomere G-quadruplexes and the privileged binding of intramolecular structures [27]. In contrast to groups B, C, and D, UV melting experiments do not show any transition for oligonucleotides from group A. On the other hand, both oligonucleotides from group B are able to form double-stranded structures. All the above results suggest that G-quadruplex structures might be more favorable for interaction with hnRNP A1 than single-stranded and double-stranded RNAs; however, due to the minor differences in observed Kd values, further studies are required to investigate this issue in detail.
The strongest interaction with the protein was observed for oligonucleotides C4 and D4. Oligonucleotides from group C contain the 5′GGA3′ sequence, which is a well-known splicing enhancer motif [28,29]. Thus, we decided to exclude this group from our cell line experiments, to avoid an undesirable regulatory side effect. Therefore, oligonucleotides from group D were chosen to be used as a binding platform in bifunctional antisense oligonucleotides. Additionally, oligonucleotides with one and three repetitions of sequence D were also used in the design of BASOs, to analyze the dependence of the regulatory effect on the number of repeated sequences. Moreover, in contrast to the highly structured group D, we also decided to assess the influence of single-stranded oligonucleotides on the regulatory properties of BASOs. The 5′CAGGUAAGU3′ sequence (oligonucleotide A) corresponds to the mammalian 5′ splice site (YAGGURAGU, where Y is a pyrimidine and R is a purine). In 1994, it was proposed that the 5′ splice site sequence could also be a binding site for the hnRNP A1 protein [22]. However, our studies showed that oligonucleotides composed of this sequence bind with weaker affinity to hnRNP A1.
Cell Line Results
The model for the regulation of alternative splicing by BASOs was the PKM gene, which contains two mutually exclusive exons, i.e., exon 9 and exon 10 (Figure 2A). In this research, we used the HeLa cell line, in which the switched expression from PKM1 to PKM2 is documented [30]. Quantitative analysis using qPCR confirmed that the level of PKM2 in HeLa cells is around 96% and the level of the PKM1 isoform is 4% (data not shown). Hitherto, research on the regulation of the alternative splicing of the PKM gene has focused on splice-switching oligonucleotides (SSOs) that block regulatory sequences, to prevent their interactions with splicing factors [31]. In contrast, our attempts were focused on the BASO-mediated suppression of exon 10 splicing. The introduction of a BASO with a silencing sequence should result in arrested or reduced recognition of the splicing site at exon 10; thereby, exon 10 should be removed from the transcript. We designed a series of BASOs, containing oligonucleotide D4 as a regulatory part, which hybridize to various fragments of intron 9 and exon 10 to silence the splicing of exon 10 (Supplementary Material, Table S2). The transfection was performed with lipofectamine as a transfection reagent. The cells were treated with 125 nM and 250 nM BASOs for 48 h. The amounts of the PKM1 and PKM2 isoforms were determined by qPCR. As a control, we used antisense oligonucleotides (ASOs) that hybridize to the same pre-mRNA fragments as the BASOs. Additionally, the results were compared with the non-transfected HeLa cell line. The initial stage of BASO optimization involved screening of the hybridization positions within exon 10 and intron 9 of the PKM gene. Earlier studies on the regulation of alternative splicing of the PKM gene have shown that the 3′ss is essential for exon 10 definition [32]. Therefore, the studies were focused on silencing the 3′ss. Two oligonucleotides were designed to be complementary to intron 9 in proximity to the 5′ end of exon 10 (Figure 2B). The next three oligonucleotides hybridize to the sequence in exon 10 (Figure 2B). For each BASO, the antisense part (ASO1, ASO2, ASO3, ASO4, and ASO5) was synthesized separately, to confirm whether a regulatory effect is observed due to oligonucleotide hybridization or due to the activity of the splicing factors recruited by the regulatory sequence. The effectiveness of the BASOs was assessed by calculating the percentage PKM2/PKM1 ratio. The most effective BASO (D4-BASO4) hybridized at position +26 to position +42 of exon 10 (see Supplementary Material Table S2 and Figure S2 for BASO sequences and quantitative results), and, for this molecule, we optimized the regulatory sequences (Figure 2C).
In already published studies, 2′-O-Me-RNA was the most commonly used chemistry of the antisense sequence [10,16,33]. However, the regulatory sequence was used in an RNA, phosphorothioate-containing RNA (PS-RNA), DNA, or 2′-O-Me-RNA series [9,11,12,15,16]. In our research, we used a mixed chemical composition of the BASO molecule. The antisense part of each oligonucleotide was composed of 2′-O-Me-RNA residues, whereas the regulatory part, with a different length and different sequences, was composed of RNA residues. Moreover, the regulatory sequence was located at the 3′ end of the BASO molecule. Our main aim was to assess the influence of the number of regulatory sequences on the modulating properties of BASOs. For this purpose, four oligonucleotides were synthesized with sequence D as the basis of the regulatory part: once repeated (D1-BASO), twice repeated (D2-BASO), three-times repeated (D3-BASO), and four-times repeated (D4-BASO) (Figure 2C).
5′UAGGU3′ Regulatory Sequence
In 1994, Burd and Dreyfuss determined, in SELEX experiments, the optimal hnRNP A1 protein binding sequence as 5′UAGGGA/U3′ [22]. Based on this knowledge, in the BASO experiments where the hnRNP A1 protein was treated as a target effector protein, only these two sequences (5′UAGGGA3′ and 5′UAGGGU3′) were used as a recruiting platform [12,15]. In our studies, 5′UAGGU3′ RNA has been shown to form a stable complex with hnRNP A1. Beusch et al. solved the NMR structure of 5′UUAGGUC3′ RNA complexed with RRM1 of hnRNP A1. They showed that five nucleotides (U1 to G5) of this sequence interact with the RRM1 domain [21]. These five nucleotides almost completely overlap with oligonucleotide D (5′UAGGU3′), which supports our idea to use this sequence in the regulatory part of BASO. To assess the effectiveness of BASOs, we defined two parameters indicating the change in the alternative splicing of the PKM gene. The ratio of the PKM2/PKM1 isoforms was calculated based on the percentage level of the isoforms in the pool of both isoforms. Additionally, the normalized expression of PKM1 and PKM2 was analyzed.
For three (D3-BASO) and four repeats (D4-BASO) at a concentration of 125 nM, the value of the PKM2/PKM1 ratio is very similar, i.e., 17.1 and 16.8, respectively. However, at a higher concentration (250 nM), a difference between these molecules can be observed, indicating that D3-BASO is less active and changes the isoform ratio by 7.6, whereas D4-BASO decreases the PKM2/PKM1 isoform ratio by 11.1 (Figure 3A).
The results of the normalized expression of the PKM1 and PKM2 isoforms show a similar trend of changes in the alternative splicing of the PKM gene (Figure 3B). The shortest oligonucleotide (D1-BASO), at a 125 nM concentration, does not show any significant effect on the expression level of either PKM isoform. However, a two-times higher concentration of D1-BASO decreases the expression of both isoforms. Importantly, the transfection of this oligonucleotide influences the expression level of both PKM isoforms, which explains the decrease in the PKM2/PKM1 ratio. A reduced PKM2/PKM1 ratio could suggest a shift of alternative splicing toward higher production of PKM1 and a decrease in PKM2. However, the results of the normalized expression indicate that the change in the ratio is caused by the decrease in both isoforms, in particular by the more significant reduction in the PKM2 level than in PKM1 expression. Interestingly, the level of PKM1 starts to increase proportionally to the length of the regulatory sequence. D2-BASO increased the PKM1 level by 21.0-36.7%, in reference to the isoform level within non-treated cells. The effect was similar for both BASO concentrations, suggesting that the maximum effective concentration of BASO was reached. The most effective D3-BASO is composed of three regulatory sequences. The 125 nM oligonucleotide concentration elevated the PKM1 level by about 81.4%, whereas the 250 nM D3-BASO concentration caused a significant increase in PKM1 expression, of about 240% in reference to the non-transfected HeLa cells. Interestingly, four repetitions of the 5′UAGGU3′ sequence within D4-BASO at both concentrations have a strong influence on PKM2 expression inhibition, with simultaneous retention of the PKM1 isoform level. Depending on what is expected, it can be assumed that the most effective BASOs are those with three or four repetitions of the regulatory sequence. D3-BASO increased the PKM1 level the most significantly, however, with no influence on PKM2 expression. On the other hand, D4-BASO decreased the expression of PKM2 the most markedly, with no significant elevation of the PKM1 isoform level.
5′CAGGUAAGU3′ Regulatory Sequence
Due to the 9-nucleotide length of the 5′CAGGUAAGU3′ sequence, we repeated it a maximum of three times. The analysis of the PKM2/PKM1 isoform ratio revealed that, in each case, the introduction of BASO causes a change in this parameter (Figure 3C). A1-BASO is the least effective molecule in the series of A-BASOs at both concentrations. For the 125 nM concentration, the PKM2/PKM1 ratio decreased to 20.1, and for 250 nM it decreased to 20.4. An additional regulatory sequence seems to enhance the effectiveness of BASO and reduce the PKM2/PKM1 value to 16.6, but only for 125 nM, whereas the isoform ratio was higher by about 1.6 for the 250 nM concentration, in comparison to A1-BASO. The highest potency in the modulation of the PKM2/PKM1 ratio is observed for three repetitions of the 5′CAGGUAAGU3′ sequence at the higher oligonucleotide concentration. Although the 125 nM A3-BASO decreased the PKM2/PKM1 level to 17.5, the increased concentration resulted in the most significant change of the ratio, which was 11.3.
The results of the normalized expression of the PKM isoforms indicate that the 5′CAGGUAAGU3′ sequence is less effective than the 5′UAGGU3′ sequence. One and two repetitions of the regulatory sequence are not able to increase the expression level of the PKM1 isoform, which is even lower than in non-transfected cells. On the contrary, the PKM2 isoform level is reduced more significantly than that of PKM1. The most notable results were obtained for the three repetitions of 5′CAGGUAAGU3′. The A3-BASO at a 125 nM concentration increased the PKM1 level by 55%, in reference to non-transfected cells. A similar change in PKM1 expression was achieved for a 250 nM oligonucleotide concentration. However, 250 nM A3-BASO also has a significant influence on the expression of PKM2. The A3-BASO at a 125 nM concentration does not show any effect on the expression level of PKM2, whereas transfection with the 250 nM oligonucleotide reduced the isoform level by about 33% (Figure 3D). In general, there is a possibility that the A series of oligonucleotides might be bound by U1 snRNP. Gendron and coworkers used another version of the 5′ss (5′GUUGGUAUGA3′) and suggested that the effectiveness of this sequence is related to interactions with U1 snRNP [15]. Indeed, interaction of this regulatory part with proteins other than hnRNP A1 might be a reason for the slightly weaker regulatory properties of A-BASOs. Additionally, there is also the possibility that more than one splicing protein interacts with this sequence, depending on the number of binding motifs in the regulatory part. Nevertheless, we have provided further evidence that a 5′ss sequence can be used in the regulatory part of BASO and can inhibit the use of the proximate 3′ss.
Branched BASOs
We also decided to design a BASO molecule that is branched, with two chains of the regulatory sequence (Figure 2D). Previously, similar splicing regulatory constructs have been used in vitro [15]. Herein, we present the first studies of branched BASO efficiency in cell line experiments. The 2XD4-BASO includes two chains with the D4 regulatory sequence, resulting in a total of eight repeats of the 5′UAGGU3′ sequence within one BASO molecule. The obtained results clearly show that such branched molecules work very effectively. The ratio of the PKM2/PKM1 isoforms decreased to the greatest extent among all the tested BASOs. The presence of 2XD4-BASO reduced the isoform ratio from 25.5 to 7.0 and 7.5 at 125 nM and 250 nM concentrations, respectively (Figure 4A). Notably, the significant changes in the quantity of the PKM1 and PKM2 isoforms can also be noted based on their normalized expression analysis. At a 125 nM concentration of 2XD4-BASO, the level of the PKM2 isoform decreased to 47%, and the amount of PKM1 increased to 175%. The effect was even more significant at a 250 nM concentration of 2XD4-BASO, indicating a decrease in PKM2 expression to 55% and an increase in PKM1 expression to 231%, in reference to non-transfected cells (Figure 4B).
A similar construct was used to double the number of regulatory binding motifs of A3-BASO (Figure 2D). Surprisingly, 2XA3-BASO did not decrease the PKM2/PKM1 ratio compared to A3-BASO. In reference to non-transfected cells, the ratio was reduced by about 7.4 and 7.3 for the 125 nM and 250 nM concentrations, respectively (Figure 4C). On the other hand, the normalized expression showed satisfying results. The PKM1 isoform level for both oligonucleotide concentrations was almost three times higher than in non-transfected cells. This is the best result obtained concerning the enhancement of the PKM1 level. However, in contrast to A3-BASO, neither of the 2XA3-BASO concentrations reduced the PKM2 expression level. The PKM2 expression even increased, to 124% and 112% for the 125 nM and 250 nM concentrations, respectively (Figure 4D). The most visible difference in the effectiveness of the two designed branched BASOs is their impact on PKM2 isoform expression. 2XD4-BASO is more effective in comparison to D4-BASO, whereas 2XA3-BASO is less potent than its linear, three-times-repeated version (A3-BASO) in reducing the PKM2 level. This supports our suggestion that A-series BASOs might act through interaction with proteins other than hnRNP A1. However, it should also be considered that 2XA3-BASO contains three binding motifs, whereas there are four regulatory sequences in 2XD4-BASO. Further studies that could identify the proteins that interact with A3-BASOs would help to understand the regulatory mechanism of these molecules.
Branched nucleic acids (bNAs) were originally designed to investigate the recognition factors of RNA branch point sequences in cells [34][35][36]. These oligonucleotides simulate the lariat, which is formed during the splicing reaction when the adenosine at the branch site triggers a nucleophilic attack on the 5′ss. This kind of molecule uses both the 2′ and 3′ hydroxyl groups of adenosine to form vicinal 2′-5′ and 3′-5′ phosphodiester bonds with other nucleotides. Branched oligonucleotides were proven to have splicing inhibitory properties, despite not interacting directly with pre-mRNA [34]. This strong similarity to the naturally occurring lariat means that bNA probably recruits and sequesters branch recognition splicing factors, leading to the inhibition of the splicing reaction. Gendron et al. used such an approach to regulate alternative splicing via branched bifunctional antisense oligonucleotides [15]. The authors proved that using bNA with two regulatory sequences, which are not effective in linear BASOs, provides a splicing silencing function. This was strong evidence that the presence of oligonucleotides with a branched adenosine can be a target for splicing factors. In our studies, we designed another type of chemical architecture to obtain a branched oligonucleotide; i.e., we applied a glycerol linker that allowed us to span the antisense part with two regulatory sequences. Therefore, from the chemical point of view, these molecules are completely different and should not be recognized by branch recognition splicing factors. In consequence, the enhanced efficiency of branched BASOs is rather due to the increased number of hnRNP A1 molecules that interact with the regulatory sequences.
Oligonucleotide Synthesis
All oligonucleotides were synthesized on a MerMade12 synthesizer (BioAutomations, LGC Biosearch Technologies, Plano, USA) using β-cyanoethyl phosphoramidite chemistry and commercially available nucleoside phosphoramidites (GenePharma Co., Ltd., Suzhou, China). Oligonucleotides used in the electrophoretic mobility shift assay were additionally 5′-labeled with 6-carboxyfluorescein (6-FAM) (ChemGenes, Wilmington, MA, USA). RNA oligonucleotides were treated with a 30% ammonia/ethanol solution (Avantor Performance Materials Poland S.A., Gliwice, Poland) (3:1 v/v) and incubated at 55 °C for 18 h. The oligonucleotide solutions were then separated from the solid support and evaporated. Next, oligonucleotides were incubated with triethylamine trihydrofluoride, in the presence of dimethylformamide (Avantor Performance Materials Poland S.A., Gliwice, Poland) as a solvent, at 55 °C for 2-3 h. Oligonucleotides were precipitated in butanol, followed by Sephadex column desalting. Oligonucleotides were purified via 12% polyacrylamide gel electrophoresis under denaturing conditions. The composition of all oligonucleotides was confirmed by MALDI-TOF mass spectrometry. Concentrations of oligonucleotide stock solutions were determined by UV measurements at λ = 260 nm.
hnRNP A1 Protein Production
Recombinant hnRNP A1 protein (2-196 aa) was produced from a plasmid received as a generous gift from Frédéric Allain, Institute of Biochemistry, ETH Zurich, Switzerland [21]. hnRNP A1 was fused to an N-terminal tag with six histidines and a TEV-protease cleavage site. The protein was overexpressed in BL21(DE3) codon-plus (RIL) competent cells.
Electrophoretic Mobility Shift Assay (EMSA)-Oligonucleotides Screening
Each of the 41 6-FAM-labeled oligonucleotides was used at a constant concentration of 0.5 µM and incubated with 60 µM hnRNP A1 protein. The reactions were prepared in hnRNP A1 dialysis buffer. The oligonucleotide/protein mixtures were incubated for 30 min at 4 °C. After incubation, the mixtures were centrifuged for 10 min at 14,000 rpm at 4 °C and loaded on a native 4.5% polyacrylamide gel (bisacrylamide/acrylamide 37.5:1). Electrophoresis was run for 2 h at 300 V at 4 °C. The gel was then scanned on a Fuji FLA-500. The intensity of the complex and free RNA bands was determined with MultiGauge (FujiFilm, Tokyo, Japan).
Electrophoretic Mobility Shift Assay (EMSA)-Kd Determination
Fourteen dilutions of the protein were prepared in the 0.1 µM to 60 µM range. The 6-FAM-labeled oligonucleotides were used at a constant concentration of 0.5 µM. The oligonucleotide/protein mixtures were prepared in 10 µL of hnRNP A1 dialysis buffer (150 mM KCl, 50 mM L-Arg, 50 mM L-Glu, 1.5 mM MgCl2, 0.2 mM EDTA, 0.05% BME, 20 mM Na2HPO4, pH 7.0) and incubated for 30 min at 4 °C. After incubation, the mixtures were centrifuged for 10 min at 14,000 rpm at 4 °C and loaded on a native 4.5% polyacrylamide gel (bisacrylamide/acrylamide 37.5:1). Electrophoresis was run for 2 h at 300 V at 4 °C. The gel was then scanned on a Fuji FLA-500 (FUJIFILM, Tokyo, Japan). The intensity of the complex and free RNA bands was determined with MultiGauge. The Kd value was calculated in GraphPad Prism 8.0, using the specific binding with Hill slope function.
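A minimal sketch of this fitting step, assuming illustrative band-intensity data and the same Hill-slope binding model as GraphPad's "specific binding with Hill slope":

```python
# Fit fraction bound vs. protein concentration with a Hill-slope model to
# extract Kd. The data points below are illustrative placeholders derived
# from hypothetical band intensities, not the paper's measurements.
import numpy as np
from scipy.optimize import curve_fit

def hill(P, Bmax, Kd, h):
    return Bmax * P**h / (Kd**h + P**h)

protein_uM = np.array([0.1, 0.25, 0.5, 1, 2, 4, 8, 15, 30, 60])
frac_bound = np.array([0.05, 0.12, 0.22, 0.38, 0.55, 0.70,
                       0.82, 0.90, 0.95, 0.97])

popt, _ = curve_fit(hill, protein_uM, frac_bound, p0=[1.0, 1.0, 1.0])
print(f"Bmax={popt[0]:.2f}, Kd={popt[1]:.2f} uM, Hill h={popt[2]:.2f}")
```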
Cell Culture and Oligonucleotides Transfection
The HeLa cell line was cultured in RPMI 1640 medium (Gibco, Waltham, MA, USA), supplemented with vitamins, antibiotics, and 10% FBS (Gibco). Cells were seeded in 24-well plates at a density of 1 × 10⁵ cells/well, which gave 95% confluence the next day. The cells were incubated at 37 °C with 5% CO2 and 95% humidity. After 24 h, the medium was exchanged for antibiotic-free medium. Oligonucleotides were dissolved in OPTI-MEM (Gibco) medium with Lipofectamine 3000 (Invitrogen, Waltham, MA, USA) as a transfection reagent (1 µL of Lipofectamine and 1 µL of P3000 reagent per reaction) and added to cells at final concentrations of 125 nM and 250 nM. Cells were harvested 48 h after transfection. All transfections were performed at least in biological triplicate. The results for each BASO were averaged.
RT-qPCR Analysis
RNA from the cultured cells was isolated using acid guanidinium thiocyanate-phenol-chloroform extraction, and the RNA was treated with DNase I. The quality of the isolated RNA was verified by evaluating the A260/A280 and A260/A230 ratios. A 200 ng aliquot of the earlier prepared RNA was used as template for cDNA synthesis, using the LunaScript RT SuperMix Kit (NEB). qPCR was performed on a CFX96 real-time PCR system (Bio-Rad) using Luna Universal qPCR Master Mix (NEB, Ipswich, MA, USA) and 96-well clear plates. Two pairs of target gene primers were designed to quantify the amounts of the PKM1 and PKM2 isoforms. Primers for PKM2 isoform amplification: 5′ATTGCCCGTGAGGCAGAGG3′ and 5′TGCCAGACTTGGTGAGGACGATTA3′. Primers for PKM1 isoform amplification: 5′GTTCCACCGCAAGCTGTTTGAAGA3′ and 5′TGCCAGACTCCGTCAGAACTATCA3′. The expression of the isoforms was normalized against the β-actin gene (reference gene primers: 5′GCCAGCAGCCTCTGATCTG3′ and 5′CTGGTTCTTGCCAGCCTCTAG3′). The Ct values of the human β-actin gene were in the range of 17-19. The qPCR cycling conditions were as follows: 95 °C for 1 min as a predenaturation step, followed by 34 cycles of (95 °C, 15 s; 60 °C, 30 s).
qPCR Statistical Analysis
The results from replicates of particular samples were gathered to determine the mean normalized expression and its standard error of the mean (Bio-Rad CFX Manager 3.0, Hercules, CA, USA). The normalized relative expression of PKM1 and PKM2 from biological replicates for BASOs and the control was compared at a significance level of 0.05 or 0.01, using Bio-Rad CFX Manager 3.0 and the GraphPad t-test calculator. At least three biological and two technical repetitions were performed. The percentage level of both isoforms, obtained from the results of the normalized expression, was used to calculate the PKM2/PKM1 ratio.
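A small sketch of the two readouts follows, assuming the standard 2^-ΔΔCt normalization against β-actin and illustrative Ct values; the exact normalization performed by CFX Manager may differ.

```python
# Normalized expression by delta-delta-Ct against beta-actin, then the
# PKM2/PKM1 ratio from the relative levels of the two isoforms.
def ddct_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change relative to the non-transfected control (2^-ddCt)."""
    dct = ct_target - ct_ref
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(dct - dct_ctrl)

# Hypothetical mean Ct values for one BASO-treated sample vs. control.
pkm1 = ddct_expression(27.0, 18.0, 28.5, 18.0)   # PKM1 up vs. control
pkm2 = ddct_expression(21.5, 18.0, 20.8, 18.0)   # PKM2 down vs. control

# Percentage of each isoform in the pool, then the PKM2/PKM1 ratio; note the
# percentages cancel, so the ratio reduces to the relative expression ratio.
total = pkm1 + pkm2
ratio = (pkm2 / total) / (pkm1 / total)
print(f"PKM1 fold {pkm1:.2f}, PKM2 fold {pkm2:.2f}, PKM2/PKM1 = {ratio:.1f}")
```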
Conclusions
The presented studies proved, for the first time, the possibility of regulating the alternative splicing of the PKM gene by using bifunctional antisense oligonucleotides in the HeLa cell line. It is also the first attempt to manipulate mutually exclusive alternative splicing with these molecules. A screening assay allowed us to choose sequences that can be used in the regulatory part of BASOs. EMSA and melting experiments proved that hnRNP A1 interacts with structured oligonucleotides and that secondary structures have an impact on the binding affinity. Based on our cell line studies, it can be assumed that the effectiveness of the regulatory part of BASOs depends on the type of binding motif. Moreover, we noticed differences in the effectiveness of PKM1 and PKM2 level modulation for both series of BASOs used. Therefore, efforts must be made to carefully choose the regulatory sequence. Furthermore, the number of regulatory binding motifs influences the regulatory properties of BASOs. Both studied sequences were the most effective in increasing the PKM1 level when three repetitions were used in the regulatory part. However, the elevated PKM1 level was not always accompanied by a reduced PKM2 level. Additionally, the doubling of both regulatory parts in branched BASOs increased the production of PKM1 to a greater extent. On the other hand, only 2XD4-BASO significantly influenced the PKM2 level. Therefore, experiments on the optimization and effectiveness of these molecules are pivotal for designing simple and potent BASO-based therapies.
The analysis of the splicing regulatory potential of BASOs containing different sequences and numbers of repetitions of regulatory sequences, as presented herein, might facilitate the design and development of novel, efficient therapeutic tools. To date, a few oligonucleotide drugs based on SSO molecules have already been approved. However, the design of SSOs requires very careful and detailed investigation of the regulatory sequences, which are pivotal for the splicing regulation of the target pre-mRNA. BASO molecules are potentially more suitable tools than SSOs for regulating splicing in all diseases and disorders related to splicing, due to the less stringent requirements for the site of hybridization with the pre-mRNA. In addition, effective optimization of the structure of BASOs can allow the same regulatory part to be used to regulate the alternative splicing of different genes.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27175682/s1. Table S1: Screening results of binding motifs for hnRNP A1; Table S2: The list of D4-BASO sequences used to optimize the position of hybridization with the PKM gene; Figure S1: The graphs present the melting curves of regulatory oligonucleotides; Figure S2: The ratio of PKM2/PKM1 isoforms after cell treatment with BASOs and ASOs that hybridize at different positions of PKM pre-mRNA. | 8,355.2 | 2022-09-01T00:00:00.000 | [
"Biology"
] |
Finite Element Analysis of Custom Shoulder Implants Provides Accurate Prediction of Initial Stability
Custom reverse shoulder implants represent a valuable solution for patients with large bone defects. Since each implant has unique patient-specific features, finite element (FE) analysis has the potential to guide the design process by virtually comparing the stability of multiple configurations without the need of a mechanical test. The aim of this study was to develop an automated virtual bench test to evaluate the initial stability of custom shoulder implants during the design phase, by simulating a fixation experiment as defined by ASTM F2028-14. Three-dimensional (3D) FE models were generated to simulate the stability test and the predictions were compared to experimental measurements. Good agreement was found between the baseplate displacement measured experimentally and determined from the FE analysis (Spearman’s rank test, p < 0.05, correlation coefficient ρs = 0.81). Interface micromotion analysis predicted good initial fixation (micromotion <150 µm, commonly used as bone ingrowth threshold). In conclusion, the finite element model presented in this study was able to replicate the mechanical condition of a standard test for custom shoulder implants.
Introduction
Since its introduction in the late 1980s, reverse shoulder arthroplasty (RSA) has become a standard treatment for patients with rotator cuff arthropathy. More recently, surgeons have expanded its application to fracture care, rheumatoid arthritis, and even failed prior surgery replacements, further increasing the number of surgeries [1,2]. In many cases, the presence of considerable bone loss at the glenoid side, due to degenerative arthritis or secondary to revision surgeries, may complicate baseplate implantation. This limits the treatment options and jeopardizes the clinical outcomes, as insufficient bone stock can lead to suboptimal component fixation and therefore early implant failure.
Different methods have been described to address glenoid defects, depending on the bone loss severity [3]. Eccentric reaming can be performed in case of moderate bone loss, while bone grafting is more suitable for large defects. However, the results of bone grafting are controversial since not all studies have reported satisfactory outcomes [4]. More recently, custom implants have been introduced as an alternative treatment. Together with patient-specific preoperative planning and implant design, custom implants allow for proper joint positioning and fixation of the component in the remaining native bone [5,6].
In order to avoid aseptic loosening of the glenoid component, a stable bone-implant interface is necessary, in which only small relative movements are allowed. Fixation screws are used to provide initial mechanical stability (primary fixation), which subsequently can lead to biological fixation by bone ingrowth (secondary fixation). To enable bone ingrowth, custom implants have a porous titanium structure (spray-coated or 3D printed) [7,8]. However, micromotion at the bone-implant interface above 150 µm has been shown to inhibit this mechanism and lead to an unstable fibrous tissue layer between the metallic porous layer and the host bone [9]. Therefore, implant design should be optimized to minimize micromotion at the time of initial fixation, thus leading to a stable bone-implant interface and to better osseointegration.
For patient-specific shoulder implants, the enormous design space, which allows the glenoid component to be adapted to the patient anatomy, represents a challenge to the evaluation of the mechanical stability. While mechanical tests can be performed extensively to assess the stability of standard implants [10][11][12], for custom implants with a unique design for each patient, it is not practical to use mechanical testing to verify the stability. Alternatively, Finite Element (FE) analysis has been widely used to evaluate the influence of different implant configurations on the initial fixation of an implant [13][14][15][16][17][18][19].
Chae et al. analyzed the bone-implant interface micromotion of an inferiorly tilted glenoid component virtually implanted in a scapula model and found that the tilted fixation compromised initial mechanical stability [17]. Suarez et al. investigated how different types and numbers of screws impacted the initial stability of a cementless glenoid component, reporting higher interface micromotions when the same implant was tested in poor quality bone [14], even when more physiological loads (e.g., from a musculoskeletal model) were applied [18]. Elwell et al. [19] reported similar results, showing that the use of only two fixation screws could amplify the negative effect of baseplate lateralization, thus jeopardizing implant stability and worsening its functional outcome. Hopkins et al. examined multiple standard designs with different screw angle inclinations, concluding that increasing the screw inclination enhanced stability more than using longer and thicker screws [15]. Other studies explored instead the effect of prosthesis repositioning (using different glenosphere sizes or bone grafting) and found that a lateralization of 10 mm was mechanically acceptable for osseointegration [13,16].
However, the effect of different loading directions, which in the case of a custom implant cannot be neglected due to the asymmetry of the design shape, was never systematically investigated. It is evident that, since the main parameters (number and type of screws, baseplate dimensions, etc.) are unique for each custom implant, FE analysis has the potential to guide the design process by virtually comparing multiple designs without the need for a mechanical test.
Therefore, the aim of this study is to develop an automated workflow to evaluate the initial stability of custom shoulder implants during the design phase, by simulating a fixation experiment based on ASTM F2028-14 [20]. To our knowledge, this is the first study to automate, evaluate and validate a full in silico modeling of the ASTM F2028-14 for a custom-made prosthesis. Moreover, the FE model can be used to predict the relative motion at the bone-implant interface, which cannot be quantified by the current mechanical tests.
Materials and Methods
A custom reverse shoulder implant was designed and 3D printed to comply with ASTM standards [20]. To evaluate the preclinical stability of the implant, displacement of the glenoid baseplate was measured in response to axial and shear loading, after insertion in a bone substitute. The experimental baseplate displacement was compared to the model estimation to validate the virtual bench test. A more detailed explanation regarding the mechanical test and the in silico model is presented in the following sections.
Experimental Set-Up
The ASTM F2028-14 [20] is a standard method commonly used for assessing the risk of glenoid loosening in shoulder implants. The test protocol includes three subsequent steps: (1) an initial static analysis to measure the baseplate displacement, (2) a fatigue phase in which the implant is cyclically rotated around an axis loaded with a compressive axial force, and (3) an additional static phase to measure the glenoid fixation, similarly to step 1.

The custom implant was inserted into a 20 pcf (pounds per cubic foot) polyurethane block (Sawbones Europe AB, Sweden), which is normally used as a substitute for glenoid bone in mechanical tests [21]. Two locking and two nonlocking (compression) screws were used to fix the implant to the artificial bone (Figure 1a). Compression screws are able to close the gap at the bone-implant interface by pressing the metal component towards the bone. For this reason, the nonlocking screws were inserted first, followed by the locking screws, which instead lock the implant in place thanks to the threaded head mating the threaded holes of the implant.
An axial compressive load of 430 N was applied perpendicular to the glenoid plane by a flat polyacetal load applicator. An additional shear load of 350 N was applied parallel to the baseplate via a horizontal loading fixture (Figure 1b). Shear and axial forces were defined in a worst-case loading scenario, being respectively 42% and 51% of body weight (assumed to be 86 kg) [20].

Contrary to standard baseplates, which normally have a symmetric round shape, custom implants can present an asymmetric design; consequently, the shear load was applied along the four main directions of the implant: anterior, posterior, superior and inferior (Figure 1a). Dial indicators (MTS System, USA) were placed to measure the displacement of the baseplate. For each loading direction, both axial and shear baseplate displacements were measured, resulting in a total of eight measurements. Each measurement was performed three times and the averaged value was obtained. The test was repeated for six identical samples under the same conditions.
Generation of Finite Element Models
An automated workflow was developed to set up FE simulations of a virtual bench test. To obtain a virtual bench test that can be run multiple times by the design engineers to support possible design decisions and adaptations, the computational time of the simulation needs to be limited. For this reason, the finite element model was created to simulate only the static step of the experimental test, without considering the fatigue aspect, similarly to the work of Virani et al. [13].
Bone and Implant Models
The geometry files (STL) of the implant were imported into the design software 3-matic (v 14.0, Materialise N.V., Leuven, Belgium), which includes a Python scripting interface to automate processes (Figure 2). The bone substitute, which had to match the nonflat contact surface at the interface with the implant, and the loading box were created through a series of Boolean operations.
All components were modeled with linear elastic material properties, which is an assumption commonly made under these experimental conditions [22]. The loading box and baseplate were assigned a Young's modulus of 110,000 MPa and a Poisson's ratio, ν, of 0.3 (corresponding to Titanium Ti-6Al-4V, [23]). The porous structure of the baseplate, mainly consisting of 3D printed Titanium, was modelled as a solid part and characterized by a lower stiffness. A Young's modulus equal to 2000 MPa and a Poisson's ratio of 0.3 were used, consistently with the values reported in the literature for titanium porous scaffolds [24]. The glenosphere was modeled using cobalt-chromium-molybdenum material properties (E = 220,000 MPa, ν = 0.3, [25]). The material properties of the foam block, representative of human glenoid trabecular bone, were taken as reference for the bone substitute (E = 200 MPa, ν = 0.3, [26]).
Contact surfaces were tied or were modelled as a hard contact with friction, depending on the interaction of the components. The interfaces between glenosphere and baseplate, and between loading box and bone block, were considered completely tied, with no relative motion. Coulomb friction contact was implemented at the bone-implant interface. In the literature, values ranging from 0.5 to 0.7 are reported for the friction coefficient between bone and porous metal [13,14,22,27], thus an average friction coefficient of 0.6 was selected for the presented model. The 3D FE models were meshed with tetrahedral C3D4 elements. For the loading box, a coarse mesh was used, with element edge lengths ranging from 2 to 4 mm. The bone block was meshed with nonuniform elements, using a finer mesh at the interface. A mesh convergence study was performed upfront by evaluating the impact of different mesh sizes on the interface micromotion. Ultimately, an average element edge length of 0.5 mm at the baseplate-bone interface was considered as the converged mesh. Nonmanifold nodes were created at the bone-implant interface to facilitate the micromotion calculation and the convergence of the contact analysis. Due to this operation, the element nodes in the contact surface were shared between implant and bone. The implant was meshed with an average edge length of 0.5 mm, for a total of approximately 630,000 elements, consistent with the dimensions of the prosthetic components and necessary to capture the complexity of the custom design. Finally, the glenosphere was meshed with an average element size of 0.5 mm. The meshing process of the screws is described in Section 2.2.3.
Screw Model
In order to assess the impact of different screw types (compression and locking) on fixation, particular attention was paid to the screw modeling. A recent study showed that an excessive simplification of the screw shaft model has an impact on the micromotion in RSA implant design analysis [22]. Hence, the validity of the simplification assumptions must always be evaluated against experimental measurements, aiming for a trade-off between acceptable computation times and prediction accuracy.

Screws were modeled following a previously described approach [28]. This approach uses structural elements for the connection to the bone, which avoids the need to mesh screw holes and the associated computational cost of additional contact analysis (Figure 3a). A script was implemented in Python 3.7 to automate the modeling process and include the screws in the Abaqus input file. As output of the design planning phase, five screw parameters could be extracted: position (head coordinates), length, direction, outer diameter and root diameter.
Each screw was modeled as a wire connecting the head point (input parameter) to the endpoint (obtained with the length and direction vector) and penetrating the elements of the bone (Figure 3b,c). All the nodes of the bone elements lying around the wire and at a maximum distance equal to the outer screw radius were connected perpendicular to the wire with rigid connector elements. The screw head was fixed to the implant in a similar way, by connecting the node representing the head with the nodes within the baseplate holes. To mesh the screw wire, beam elements (B32, three-node) with a circular cross section equal to the root radius were used, imposing as nodes the calculated intersection points between wire and connector elements. Since titanium screws were used, a Young's modulus of 110,000 MPa and a Poisson's ratio of 0.3 were assigned as material properties.
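As an illustration of this screw abstraction, the sketch below reconstructs the wire geometry and selects the bone nodes to be tied with rigid connectors. It is a simplified stand-in for the authors' (unpublished) Python script; the function name, inputs, and node-selection logic are assumptions based on the description above.

```python
import numpy as np

def build_screw_wire(head, direction, length, outer_d, root_d, bone_nodes):
    """Simplified screw model: a beam 'wire' from head to endpoint, plus the
    indices of bone nodes to be tied to it with rigid connectors
    (hypothetical helper, not the authors' actual script)."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)      # unit insertion direction
    head = np.asarray(head, dtype=float)
    end = head + length * direction             # endpoint from length and direction

    # Perpendicular distance of each bone node to the wire axis
    rel = bone_nodes - head
    axial = rel @ direction                     # projection along the wire
    radial = np.linalg.norm(rel - np.outer(axial, direction), axis=1)

    # Bone nodes within the outer screw radius, along the embedded length,
    # would be connected perpendicular to the wire with rigid connectors
    mask = (radial <= outer_d / 2) & (axial >= 0) & (axial <= length)
    return head, end, root_d / 2, np.where(mask)[0]

# Example with made-up planning parameters (mm)
bone = np.random.rand(1000, 3) * 30
head, end, beam_radius, tied = build_screw_wire([5, 5, 0], [0, 0, 1], 18.0, 4.5, 3.2, bone)
print(end, beam_radius, tied.size)
```

The beam cross-section radius equals the root radius, matching the B32 beam definition above.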
To differentiate the mechanical behavior between locking and compression screws, additional assumptions were made. To model the loose connection between the unthreaded head of a compression screw and the implant, the stiffness of the first 2 mm of the screw shaft was set to 200 MPa, a value equal to the elastic modulus of the bone substitute [14].

Moreover, nonlocking screws provide an initial compression that constrains the implant towards the bone. The impact of this aspect on FE analysis has already been examined in the literature, demonstrating that the inclusion of preload in the model is a key parameter when investigating interface micromotion [29]. For this reason, preload was explicitly modeled using the pretension section of Abaqus at the intersection of the screwed and nonscrewed portions of the shaft, similarly to the study of Virani et al. [13]. For the current model, the input values of the insertion force were estimated based on experimental data [30]. Briefly, a custom-made load sensor was built to measure the compression force generated by the screw head. Screws with different lengths were inserted into synthetic bone blocks (Sawbones; Malmö, Sweden) of 20 pcf and the force was acquired until failure of the bone substitute. This resulted in a maximum compression of 370 N and 420 N for the two screws used in the loosening test. Since those values were measured at failure loads, the pretensions in Abaqus were set to 260 N and 300 N, by taking 70% of the force to failure [14].
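The pretension inputs follow directly from the measured failure forces; a minimal sketch of the arithmetic (forces taken from the text, the rounding to the values entered in Abaqus is assumed):

```python
# 70% of the measured force at failure of the bone substitute [14,30];
# the study used 260 N and 300 N as the rounded Abaqus inputs.
failure_force_N = [370.0, 420.0]
pretension_N = [0.7 * f for f in failure_force_N]
print(pretension_N)  # [259.0, 294.0]
```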
Boundary Conditions and Simulation Steps
Boundary and loading conditions mimicking the experimental set-up were applied. Specifically, the bottom and side faces of the rectangular metal box were fully constrained in all directions. The axial load of 430 N was applied perpendicular to the glenoid plane through the glenosphere. A patch (10 mm radius) was defined on top of the glenosphere cup surface and all nodes lying inside were selected to apply the load (Figure 2). For the shear force, a patch of 1 mm radius was defined on the inferior side of the cup, so as to simulate the horizontal load fixture (Figure 2a).

To estimate the baseplate displacement, measurement patches of nodes (1 mm radius, representative of the dial indicator tip) were also automatically defined on the baseplate surface, using the known direction vector of the load (Figure 2b). For example, when the shear load was imposed inferiorly (Figure 1a), the measurement patch was defined superiorly, centered at the intersection point between the load direction vector and the edge of the implant surface.
All analyses were performed in Abaqus/Standard 6.14 (Dassault Systèmes, Waltham, MA, USA). To solve the nonlinear equilibrium equations, Newton's method was used [31]. A three-step analysis was implemented to mimic the experimental set-up and take into account the implemented surgical technique, which consists of inserting the compression screws first followed by the locking screws: in the first step, screw pretension was modeled (see Section 2.2.3); in the second step, the shear load was applied, followed by the axial load in the third step.

The end of the first step was considered as the initial state for the displacement analysis, similarly to the experimental set-up (pretension of the compression screws already present before the application of the loads). Consequently, the final baseplate displacement, used for model validation, was defined as the difference in the average displacement of the patch nodes between the second and third steps.
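A minimal sketch of this validation metric, assuming the nodal displacements of steps 2 and 3 have been exported from the solver as (n_nodes, 3) NumPy arrays; the helper and its inputs are hypothetical, not part of the published workflow:

```python
import numpy as np

def baseplate_displacement(u_step2, u_step3, patch_ids, measure_dir):
    """Change in the mean displacement of the measurement-patch nodes between
    the shear step (2) and the axial step (3), projected on the direction of
    the (virtual) dial indicator."""
    delta = u_step3[patch_ids] - u_step2[patch_ids]   # per-node change
    mean_delta = delta.mean(axis=0)                   # average over the patch
    d = np.asarray(measure_dir, dtype=float)
    d /= np.linalg.norm(d)
    return mean_delta @ d                             # component along the indicator
```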
Statistical Analyses and Sensitivity Study
Predicted implant stability values were calculated as the average of the displacements for the nodes lying in the measurement patch, as defined in Section 2.2.3. Both the shear and axial components of the displacements were taken into account. A Spearman's rank order correlation test was used for comparing the consistency of results between the experimental and in silico analysis, with a significance level set to 0.05. Correlation coefficients whose magnitudes were lower than 0.7, between 0.7 and 0.9 and higher than 0.9 indicated respectively a moderate, high and very high correlation [32].
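As a sketch of this comparison, the snippet below applies SciPy's Spearman test to eight made-up displacement pairs (4 directions × {axial, shear}) and labels the correlation strength with the thresholds above:

```python
from scipy.stats import spearmanr

# Illustrative (made-up) paired values in mm: experimental vs. FE-predicted
experimental = [0.021, 0.035, 0.018, 0.027, 0.012, 0.019, 0.009, 0.015]
fe_predicted = [0.019, 0.030, 0.017, 0.024, 0.011, 0.016, 0.010, 0.013]

rho, p = spearmanr(experimental, fe_predicted)
label = "very high" if abs(rho) > 0.9 else "high" if abs(rho) > 0.7 else "moderate"
print(f"rho = {rho:.2f} ({label}), p = {p:.4f}")
```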
Besides the baseplate displacement, shear and axial micromotion at the bone-implant interface were calculated using the FE method. These micromotions comprised the displacement values for all nodes on the contact surface. Since nonmanifold nodes were created at the bone-implant interface, micromotion was defined as the relative motion between the corresponding nodes after application of the loads. In particular, for each contact node on the implant surface, micromotion was calculated as U_P = R_P − R_B, where R_P and R_B are the vector positions of the node on the prosthesis (P) and its corresponding one on the bone surface (B), respectively. Shear (U_t) and axial (U_n) micromotion were then calculated by projecting the total micromotion on the corresponding loading direction vectors, as U_t = U_P · t̂ and U_n = U_P · n̂, where t̂ and n̂ respectively represent the unit vectors of the directions along which the shear and axial loads were applied. The total relative micromotion between glenoid baseplate and bone is further referred to as peak micromotion [33] and was visualized as a color map on the back of the prosthesis.
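The node-pair bookkeeping above translates directly into a few lines of NumPy; the array names are assumptions for illustration:

```python
import numpy as np

def interface_micromotion(r_prosthesis, r_bone, t_hat, n_hat):
    """Per-node interface micromotion from the deformed positions of the
    paired (nonmanifold) nodes on the implant and bone contact surfaces,
    given as (n, 3) arrays."""
    u_p = r_prosthesis - r_bone            # U_P = R_P - R_B
    u_t = u_p @ np.asarray(t_hat)          # shear component (projection on t)
    u_n = u_p @ np.asarray(n_hat)          # axial component (projection on n)
    peak = np.linalg.norm(u_p, axis=1)     # total (peak) micromotion
    return u_t, u_n, peak
```

A peak value above the 150 µm ingrowth threshold would then be flagged during design review.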
To evaluate the impact of changes in the model parameters on the FE output interface micromotion, a sensitivity analysis was performed. In particular, changes in the bone substitute material properties, the friction coefficient and the screw preload were investigated. A summary of these numerical tests is presented in Table 1. Each parameter was modified independently, for a total of 24 simulations (six for each loading condition). For the stiffness of the bone surrogate, the Young's modulus was modified to mimic the properties of 15 pcf (osteoporotic bone) and 30 pcf foam blocks, corresponding to 150 MPa and 553 MPa respectively [16,26].
The Coulomb friction coefficient was adapted to simulate local changes at the bone-implant interface by imposing values of 0.5 and 0.7, which are representative of the friction ranges found in the literature.

Finally, a change in the preload of the compression screws was applied, modifying the baseline pretension value by ±20%.
A paired t-test was used to compare the peak micromotion of the baseline model with each sensitivity model, with a significance level set to 0.01, following a Bonferroni correction of the alpha value (α = 0.05, n = 6: α/n ≈ 0.01).
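A minimal sketch of one such comparison with SciPy, using made-up peak-micromotion values; the corrected alpha follows the text:

```python
from scipy.stats import ttest_rel

alpha = 0.05 / 6  # Bonferroni correction over the six sensitivity models (~0.01)

# Made-up peak micromotions (µm) of the baseline and one perturbed model,
# paired over the same set of loading cases
peak_baseline = [42.0, 37.5, 55.1, 23.9, 61.2, 48.3]
peak_variant  = [51.3, 44.0, 66.8, 29.5, 74.1, 57.9]

t, p = ttest_rel(peak_baseline, peak_variant)
print(f"t = {t:.2f}, p = {p:.4f}, significant = {p < alpha}")
```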
Results
FE results for the baseplate displacement were within the variability of the experimental measurements for all loading directions (Figure 4). The smallest displacements were found when the shear load was applied inferiorly to the baseplate. Spearman's rank order test revealed a statistically significant (p < 0.05) high correlation (ρs = 0.81) between the experimental and FE results.
The maximum interface micromotion was found for the anterior shear load (Figure 5). For all the loading directions, the median peak micromotion was lower than 50 µm. A 95th percentile of 141 µm, 80 µm, 73 µm and 25 µm was reported for the anterior, posterior, superior and inferior loading respectively. When looking at the axial and shear components, the median shear micromotion was always higher than the axial. For none of the loading directions was micromotion above 150 µm reported (Figure 6).

The sensitivity of the model to input parameters showed a peak micromotion for the baseline model which was significantly different (p < 0.01) when compared to the models with reduced and increased elastic moduli of the bone substitute, for all the loading directions (Figure 7). For the anterior loading, which reported the highest micromotion values, significant differences were also found between the baseline model and the one with reduced/increased compression screw pretension.
Discussion
In this study, an automated workflow to evaluate the preclinical stability of a shoulder implant through FE simulations was presented and validated. To our knowledge, this is the first work to report a full in silico modeling of ASTM F2028-14 for a custom-made prosthesis. Although previous studies [13,14,22] reported FE analysis for a similar experimental set-up, the effect of different loading directions, which in the case of a custom implant cannot be neglected due to the asymmetry of the design shape, was never systematically investigated. This approach resulted in a total of eight measurements that were used to support the FE predictions.
The results of the mechanical test showed an influence of the loading direction on the implant stability. In particular, the presented design reported the lowest displacements when the shear load was applied inferiorly to the glenosphere. This is mainly due to the presence of two screws, one locking and one compression, in the superior part of the baseplate, which are almost perpendicular to the direction of the inferior load and opposite to its application point. Instead, the highest displacements were measured for the anterior loading direction, due to the absence of good screw fixation at the anterior side. These results further corroborate the idea that each new implant should be tested under these different conditions.
All the experimental measurements showed a high variability. Although one unique design was tested with six samples, this variability is likely to reflect the variations that occurred during the production of the implants and the assembly of the different components. The 3D printing technique used for fabrication could introduce inaccuracies, especially in the porous structure, which may have influenced the mechanical measurements. Similarly, the bone substitute blocks were artificially carved to match the nonflat baseplate surface, possibly causing additional variation.
Direct comparison of the experimental outcomes with previous studies is not possible due to major methodological divergences. Higher mechanical loads were used to test standard implants (750 N both in axial and shear) and only the shear displacement was measured when the load was applied superiorly [12,13,15]. Under this configuration, the presented work reported slightly higher shear values (Figure 4, inferior direction), meaning that the effect of a smaller applied load was compensated by the use of a custom implant with a nonstandard design (e.g., nonflat contact surface, asymmetry of the shape).
The good agreement between experimental and FE-predicted micromotions was confirmed by a Spearman's rank test, resulting in a correlation coefficient of 0.81 (high), which is lower than the one reported by Virani et al. (0.96, [13]). The lower correlation coefficient can be explained by the use of a custom design, which leads to additional complexity in the simulation. Similar to Virani et al. [13], overstiffening of the model was observed, which, in the context of this study, can be partially explained by the use of linear tetrahedral elements in the meshing process, a choice justified by the need for low computational cost.

One limit of the standard mechanical test presented here is related to the lack of micromotion measurements at the bone-implant surface. In contrast, FE modeling can provide valuable insight into the interface behavior, although its accuracy cannot be directly evaluated against experimental outputs. As previously described, micromotion above 150 µm can jeopardize bone ingrowth and lead to an unstable fixation [9]. Design engineers should take this aspect into account when looking for possible design adaptations. For this reason, interface micromotion was estimated through the FE model. When evaluating the two separate components, higher median values were reported for the shear component. These results are in accordance with previous studies indicating that micromotion of reverse implants occurs mainly in shear [34]. For none of the loading directions was peak micromotion found to be higher than 150 µm, suggesting that the implant design does not jeopardize bone ingrowth. Additionally, the highest values were calculated at the edge of the interface, where osseointegration is less likely to happen.
The interface micromotions predicted by the FE model were sensitive to changes in some of the input parameters: the FE model was sensitive under all the loading directions to a change in bone quality (150 MPa and 553 MPa), similarly to what has been reported in the literature [14]. Moreover, this study corroborates the idea that the impact of an adequate modeling of the compression screws cannot be neglected [29]. A change in the screw pretension can lead to very different micromotion, thus suggesting that pretension should always be included in the simulation and its value estimated or derived through experimental measurements.
The generalizability of these results is subject to certain limitations which need to be addressed. Major assumptions were made during the creation of the in silico model, looking for a trade-off between accuracy and computational cost. The bone substitutes were modeled with homogeneous isotropic material properties, a simplification commonly accepted and implemented in the literature [13,14,16,22], although not fully representative of the behavior of the bone substitute. The porous structure of the implant was not explicitly modelled, to reduce the complexity of the model. As an alternative, a lower elastic modulus was used for the corresponding elements. While this assumption impacts the frictional behavior at the interface, the sensitivity analysis showed that a change in this parameter did not substantially influence the micromotion estimations (at least in the configurations where the highest values were reported).

While 150 µm is the ASTM accepted threshold to promote osseointegration [20], its application has been challenged in the literature. Other studies [15,35] referred to lower values (20-50 µm) during the evaluation of interface micromotion. When lowering the threshold, the presented model would still predict bone ingrowth in the inner region of the prosthesis; however, these results should be interpreted carefully, always considering the simplifications of the study.

The automated workflow was built to replicate only the static analysis described in the ASTM standard, and additional efforts should be made to include the dynamic loading, which is probably not compatible with the requirement of a low computational workflow. However, it can be assumed that minimizing the initial static displacement with an optimized design will also lead to a better fatigue outcome.

Validation of the model was obtained only for a single design and under a relatively limited set of conditions. It is believed that a more complete experimental set of tests is necessary, at least to assess the impact of additional design changes (e.g., number and type of screws) and to ensure the validity of the assumptions made. To further strengthen the predictive power of the simulation, alternative micromotion metrics would be necessary since the current mechanical set-up fails to provide a direct measure of the full-field interface micromotion [29,35].
In summary, the automated workflow presented in this study was able to replicate the mechanical conditions of a standard test for a patient-specific shoulder implant. The finite element analysis can potentially support the engineers during the design phase, by virtually comparing different implants. Moreover, the minimization of the interface micromotion would lead to an improved initial stability and hence to a better clinical outcome, by allowing for secondary fixation through bone ingrowth and reducing the risk of revision surgery due to mechanical loosening. Finally, the presented tool could be used to define which configurations need to be tested when looking for worst-case scenarios, thus reducing the amount of required mechanical experiments.
Figure 1. Left (a), top view of the custom implant with the four main directions: anterior, posterior, superior and inferior. Four screws were used to fix the implant: two locking (L) and two nonlocking (compression, C). Right (b), experimental set-up with a shear load (red arrow) applied inferiorly via a horizontal loading fixture. Axial load was applied through the glenosphere (blue arrow). Axial and shear components of the baseplate displacement were measured superiorly with two dial indicators (green arrows).
Figure 2. Left (a), isometric view of the finite element (FE) model with a shear load applied inferiorly. In blue, the patch defined for the application of the axial load; in red, the shear load patch. Right (b), superior view of the FE model. In green, the measurement patch defined to calculate the baseplate displacement.
Figure 3. Left (a), top view of the model and the four screws. In blue, the connectors between screw head and implant. Right (b), detail of one screw (implant transparent). Right (c), the generated screw model.
Figure 4. Baseplate displacement measured experimentally (boxplot) and determined from the model (red dots). For the FE analysis, predicted values were calculated as the average of the displacements for the nodes lying in the measurement patch, as defined in Section 2.2.3. Data were normalized to the largest micromotion measured in any of the tests. For each of the four main implant directions, both axial and shear displacements were measured. Gray points represent outliers in the measurements.
Figure 5. Interface micromotion. Shear and axial components of the total micromotion (peak) were evaluated for all the loading directions. The red dashed line represents the 150 µm threshold.
Figure 6. Back view of the implant. Peak micromotion map at the bone-implant interface for all the loading directions.
Table 1. Parameter variation for the sensitivity analysis.
"Engineering",
"Medicine"
] |
Dramatic Increase in Oxidative Stress in Carbon-Irradiated Normal Human Skin Fibroblasts
Skin complications were recently reported after carbon-ion (C-ion) radiation therapy. Oxidative stress is considered an important pathway in the appearance of late skin reactions. We evaluated oxidative stress in normal human skin fibroblasts after carbon-ion vs. X-ray irradiation. Survival curves and radiobiological parameters were calculated. DNA damage was quantified, as were lipid peroxidation (LPO), protein carbonylation and antioxidant enzyme activities. Reduced and oxidized glutathione ratios (GSH/GSSG) were determined. Proinflammatory cytokine secretion in culture supernatants was evaluated. The relative biological effectiveness (RBE) of C-ions vs. X-rays was 4.8 at D0 (irradiation dose corresponding to a surviving fraction of 37%). Surviving fraction at 2 Gy (SF2) was 71.8% and 7.6% for X-rays and C-ions, respectively. Compared with X-rays, immediate DNA damage was increased less after C-ions, but a late increase was observed at D10% (irradiation dose corresponding to a surviving fraction of 10%). LPO products and protein carbonyls were only increased 24 hours after C-ions. After X-rays, superoxide dismutase (SOD) activity was strongly increased immediately and on day 14 at D0% (irradiation dose corresponding to a surviving fraction of around 0%), catalase activity was unchanged and glutathione peroxidase (GPx) activity was increased only on day 14. These activities were decreased after C-ions compared with X-rays. GSH/GSSG was unchanged after X-rays but was decreased immediately after C-ion irradiation before an increase from day 7. Secretion of IL-6 was increased at late times after X-ray irradiation. After C-ion irradiation, IL-6 concentration was increased on day 7 but was lower compared with X-rays at later times. C-ion effects on normal human skin fibroblasts seemed to be harmful in comparison with X-rays as they produce late DNA damage, LPO products and protein carbonyls, and as they decrease antioxidant defences. Mechanisms leading to this discrepancy between the two types of radiation should be investigated.
Introduction
Effects of conventional radiation therapy (RT) using low-LET (linear energy transfer) X-rays on tumours and on normal tissues have been investigated for decades. Proton therapy, which is a more recent RT modality, has proven effective on tumours with a more precise dose delivery. C-ion therapy should have the same advantages with a higher RBE. Indeed, whereas the proton RBE is considered to be 1.15 compared with X-rays [1], the RBE of C-ions is estimated at 2 to 3 in tumours [2]. However, C-ion hadrontherapy is still underinvestigated, especially concerning normal tissues. Some in vitro studies showed no difference in the RBE 10% of tumour cells vs. normal cells after C-ion irradiation [2]. Moreover, recent reports showed acute and late skin complications after C-ion RT [3]. Of 35 patients treated for unresectable bone and soft tissue sarcoma by a dose escalation protocol, 35 and 27 presented acute or late skin reactions, respectively, after exposure to doses ranging from 52.8 to 73.6 GyE in 16 fixed fractions. They were followed up from 29.5 to 71.7 months after C-ion RT. Late skin reactions reached grade IV (RTOG/EORTC Scoring System). It was long considered that radiation-induced late cutaneous injury was only due to the delayed mitotic death of dermal parenchymal [4] or vascular cells, thus explaining why the lesions are progressive and inevitable. However, in vitro studies have demonstrated an active role of dermal fibroblasts and endothelial cells in the response to irradiation through the use of anti-inflammatory and antioxidant treatments [5,6], which have proven very effective in patients presenting late skin complications [7].
Oxidative stress (OS) is an important pathway potentially leading to cell death after irradiation through oxidative damage to biological macromolecules when antioxidant defences are overwhelmed. While DNA is considered the main target of radiation by direct or indirect effects, it is now thought that ROS (Reactive Oxygen Species) are greatly involved in cellular DNA and macromolecule damage, as they are produced in early and late waves and are maintained over a long period of time after irradiation. Few studies have been performed on OS occurring after C-ion irradiation. Wan et al. [8] reported that peroxide production was similar in human epithelial cells after proton or X-ray irradiation, but was reduced after 56Fe ion irradiation. They observed that a selection of antioxidants delivered alone or in combination and administered either before or during irradiation protected MCF10 breast epithelial cells irradiated with X-rays, γ-rays, protons, or HZE (high Z and high energy) particles against OS [9]. In vivo studies were also performed with antioxidants given to rodents exposed to HZE particles vs. protons or γ-rays [10][11][12][13]. These compounds protected against OS as measured in plasma using the Total Antioxidant Status assay. Concerning C-ion irradiation especially, a study of gene regulation of the oxidative stress pathway in vitro showed an increase in heme oxygenase-1 and NAD(P)H dehydrogenase-quinone-1 after C-ion irradiation [14], but no X-ray irradiation was performed for comparative purposes. In vivo, gene regulation of mouse tumours transplanted in C3H/HeNrs mice was not modified after X-ray or C-ion exposure [15]. Taken together, these results tend to indicate that OS plays a major role after high-LET radiation.
In the present work, we were interested in human skin fibroblasts from young adult healthy individuals, as skin is the first organ exposed during RT. Cells were irradiated at confluence (G0) to mimic skin physiology, either with 5 MV X-rays or with 75 MeV/n C-ions corresponding to a real energy to cells of 72 MeV/n (LET = 33.6 keV/µm) [16]. This C-ion energy delivered at the GANIL facility (Caen, France) corresponds to the delivered dose in the plateau phase before the Spread-Out Bragg Peak (SOBP). This was well adapted to our experiments as skin is exposed to this energy. OS parameters were measured until 21 days after X-ray or C-ion irradiation at D10% and D0%, corresponding to ~10% and ~0% survival, respectively. Concerning C-ion irradiation, the doses corresponded to the range of 3.3 to 4.6 GyE/16 fractions delivered at NIRS (Chiba, Japan) when acute and late skin reactions were encountered [3].
X-ray and C-ion irradiation
Confluent cells were irradiated at room temperature either with X-rays using an Orion generator (CGR MEV, Riverside, CA, USA; 5 MV, 1.4 Gy/min) or with C-ions on the D1 line of the GANIL accelerator (Caen, France; 75 MeV/n, 1 Gy/min) [16]. C-ion irradiation was carried out in the LARIA facility during different runs between 2008 and 2011. Cells were kept until 21 days after irradiation and medium was replaced twice a week.
Clonogenic survival
Eighteen hours after irradiation at confluence, cells were trypsinized. Clonogenic assessment was done according to the historical method described by Puck et al. [17]. Briefly, 1000 to 5000 cells were plated in 25 cm² tissue culture flasks. Colonies of ≥50 cells were scored 10-14 days after irradiation and the surviving fraction (SF) was calculated taking into account control cell plating efficiency. SF was fitted using the linear-quadratic model. For subsequent experiments, the chosen irradiation doses were approximately D10% and D0%, corresponding respectively to 2 and 6 Gy for C-ions and to 6 and 10 Gy for X-rays.
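A minimal sketch of such a fit in Python, with made-up clonogenic data; `curve_fit` stands in for whatever fitting routine was actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

def lq_survival(dose, alpha, beta):
    """Linear-quadratic model: SF(D) = exp(-(alpha*D + beta*D^2))."""
    return np.exp(-(alpha * dose + beta * dose ** 2))

# Made-up clonogenic data: dose (Gy) vs. surviving fraction
dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
sf = np.array([1.0, 0.85, 0.70, 0.38, 0.15, 0.05])

(alpha, beta), _ = curve_fit(lq_survival, dose, sf, p0=(0.1, 0.01))
print(f"alpha = {alpha:.3f}/Gy, beta = {beta:.4f}/Gy^2, "
      f"SF2 = {lq_survival(2.0, alpha, beta):.2f}")
```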
Alkaline comet assay
The alkaline single-cell gel electrophoresis assay described by Laurent et al. [5] was used. The mean Olive Tail Moment (OTM) immediately, 1 hour or 3 hours after irradiation was calculated using the computer image analysis software Casp.
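For reference, the OTM combines the percentage of DNA in the comet tail with the distance between the head and tail intensity centroids; the sketch below uses the definition commonly implemented in comet-assay software such as Casp (assumed here, not stated in the text):

```python
def olive_tail_moment(tail_dna_percent, tail_centroid, head_centroid):
    """Olive Tail Moment = %DNA in tail x head-tail centroid distance / 100."""
    return tail_dna_percent / 100.0 * abs(tail_centroid - head_centroid)

print(olive_tail_moment(25.0, 48.0, 30.0))  # 4.5 for these made-up values
```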
Protein carbonyls in cell homogenates
Protein carbonylation was measured using the Millipore OxyELISA TM Oxidized Protein Quantitation kit. Cells were lysed by a thermal shock in [Tris/HCl 10 mM, Triton 0.1%, sucrose 200 mM] buffer.
Superoxide dismutase, catalase and glutathione peroxidase activities in cell homogenates
Total SOD, catalase and GPx activities were measured using Calbiochem assay kits as described by the manufacturer. Cells were lysed by a thermal shock in [Tris/HCl 10 mM, Triton 0.1%, sucrose 200 mM] buffer.
Reduced and oxidized glutathione ratio in cell homogenates
Reduced and oxidized glutathione ratios (GSH/GSSG) were measured using a Calbiochem assay kit as described by the manufacturer. Cells were lysed by a thermal shock in [Tris/HCl 10 mM, Triton 0.1%, sucrose 200 mM] buffer.
Cytokine concentrations in supernatants
TNF-α, IL-6 and IL-1β in cell supernatants were quantified by means of a chemiluminescent enzyme immunometric assay employing an IMMULITE® 1000 automated analyser (Siemens Healthcare Diagnostics S.A.S). The sensitivity of the assay was 2, 4 and 5 pg/mL for IL-6, TNF-α and IL-1β, respectively.
Statistical analysis
Results were normalized to control values. Data are depicted as mean ± SEM. ** for p<0.001 or * for p<0.05 for X-irradiated cells compared with control cells and † † for p<0.001 or † for p<0.05 for C-ion compared with X-ray irradiated cells (one-way ANOVA with the Tukey test). Each experiment was done independently in triplicate.
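A minimal sketch of this testing scheme in Python, with made-up triplicate values; `statsmodels` provides the Tukey post hoc test:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up triplicates (normalized to control) for one endpoint
control = [1.00, 0.98, 1.03]
xray = [1.42, 1.55, 1.48]
cion = [0.81, 0.76, 0.85]

print(f_oneway(control, xray, cion))                  # one-way ANOVA
values = np.concatenate([control, xray, cion])
groups = ["control"] * 3 + ["X-ray"] * 3 + ["C-ion"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise Tukey test
```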
Survival
Fibroblast survival fractions were greatly decreased after C-ion compared with X-ray irradiation (Figure 1A). SF2 and D0 values presented a 9.5-fold and a 4.6-fold decrease after C-ion vs. X-ray irradiation, with 7.6% vs. 71.8% and 0.8 Gy vs. 3.7 Gy, respectively (Figure 1B). RBE values were 4.77 for 37% and 3.28 for 10% survival, respectively. The α value, representing radiation sensitivity, was 25-fold higher after C-ions compared with X-rays. In contrast, the β value was 3-fold lower after C-ions compared with X-rays, with a value of 0.02 explaining the almost linear shape of the C-ion survival curve. The α/β ratio was increased 83-fold after C-ions compared with X-rays, with respective values of 66.6 and 0.8. The D10% and D0% irradiation doses chosen for subsequent experiments corresponded to 6 Gy and 10 Gy for X-rays and 2 Gy and 6 Gy for C-ions, respectively.
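As a worked example, the LQ parameters can be inverted to recover isoeffective doses and hence the RBE; the α values below are approximations derived from the reported α/β ratios and β values, not figures quoted in the text:

```python
import numpy as np

def dose_for_sf(alpha, beta, sf):
    """Invert SF = exp(-(alpha*D + beta*D^2)) for the dose D giving fraction sf."""
    return (-alpha + np.sqrt(alpha**2 - 4 * beta * np.log(sf))) / (2 * beta)

# Approximate parameters consistent with the reported ratios:
# C-ions: beta = 0.02 with alpha/beta = 66.6; X-rays: alpha/beta = 0.8
# with beta ~3-fold higher than the C-ion value
alpha_x, beta_x = 0.048, 0.06
alpha_c, beta_c = 1.33, 0.02

rbe_10 = dose_for_sf(alpha_x, beta_x, 0.10) / dose_for_sf(alpha_c, beta_c, 0.10)
print(f"RBE_10% ~ {rbe_10:.2f}")  # ~3.4, close to the reported 3.28
```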
DNA damage
DNA single-and double-strand breaks as well as alkali-labile sites were quantified by means of the alkaline comet assay (Figure 2). OTMs were increased immediately after X-rays in a dose-dependent manner. Immediately after C-ion irradiation, OTMs were less increased than after X-rays, with an RBE 10% of DNA damage induction of 0.62. One hour after irradiation, OTMs returned to control levels for C-ion irradiated but not for X-irradiated fibroblasts. At 3 hours, a new increase in OTMs occurred only in C-ion irradiated fibroblasts, with an RBE 10% of 2.80.
Lipid peroxidation and protein carbonylation products
ROS lead to polyunsaturated fatty acid peroxidation. Lipid hydroperoxides are degraded mainly into MDA and HAEs, which react in a covalent manner with proteins and inactivate them. MDA and HAE lipid peroxidation products were unchanged after X-ray irradiation except for D0% immediately and at day 21 (Figure 3A). After C-ion irradiation, an increase was mainly observed at day 1, with an RBE 10% of 2.05. Carbonyl groups resulting from protein oxidation were quantified (Figure 3B). Their quantity was unchanged by X-rays except for a decrease for D10% at day 21. After C-ion exposure, a 2.8-fold increase was observed at day 1 for D0%. For D10%, a 1.8-fold decrease was observed at day 7 and the concentration of carbonyl groups finally reached ~0 at day 21. A non-significant increase for D10% and D0% occurred 14 days after C-ion irradiation, with an RBE 10% of 4.83.
Antioxidant enzyme activities
The three main antioxidant enzyme activities were quantified. After X-rays, total SOD activity was increased immediately and at day 14 at D0% (Figure 4A). After C-ion irradiation, SOD activity was decreased compared with X-rays (except for D10% immediately after irradiation), with an RBE 10% of 0.58 at day 7. Catalase activity was unchanged after X-rays, but was decreased after C-ions compared with X-rays (except for D0% at day 21), with an RBE 10% of 0.59 at day 7 (Figure 4B). Like catalase activity, GPx activity was unchanged after X-rays except for a slight significant increase at day 14, but was decreased after C-ions compared with X-rays (except at day 21, where an increase was observed), with an RBE 10% of 0.59 at day 7 (Figure 4C).
Reduced and oxidized glutathione ratio
Glutathione, which represents a major antioxidant defence system, is a tripeptide that can give an electron or hydrogen atom to a peroxide in an oxidation reaction catalyzed by GPx. Glutathione is then in its reduced form (GSH) or its oxidized form (GSSG). GSSG must be reduced again to GSH by the action of glutathione reductase. The GSH/GSSG ratio is usually used as a cellular indicator of redox potential, as glutathione depletion leads to the accumulation of hydrogen peroxide and the generation of severe OS. In X-ray irradiated fibroblasts, the GSH/GSSG ratio was not significantly changed, whereas a decrease in comparison with X-rays was observed immediately after C-ion irradiation, before an increase from day 7, with an RBE 10% of 2.09 at day 7 (Figure 5).
TNF-α, IL-1β and IL-6 secretion
TNF-α, IL-1β and IL-6 are mediators of the inflammatory response and can be secreted by activated macrophages, T cells, smooth muscle cells and fibroblasts. IL-1β in culture supernatants could not be detected and TNF-α was unchanged after X-ray or C-ion irradiation (data not shown). An increase in IL-6 concentration occurred from day 14 after X-rays, reaching a maximum at day 21 with a 2.7-fold increase for D10% and a 3.8-fold increase for D0% (Figure 6). After C-ions and in comparison with X-rays, an increase occurred at day 7 for D10% and D0%, with an RBE 10% of 1.35, before a decrease on days 14 and 21 only for D10%, with an RBE 10% of 0.66.
Discussion
In the present work, normal human skin fibroblasts were exposed to 33.6 keV/µm C-ion beams [16]. This LET, corresponding to the plateau phase before the SOBP in C-ion RT, was well adapted to our experiments on dermal fibroblasts, as skin is the first organ exposed at the entrance of the beam. Skin complications were reported for the first time by Yanagi et al. [3] concerning patients treated with C-ions. Thirty-five patients treated for unresectable bone and soft tissue sarcoma by a dose escalation protocol were studied: 35 presented acute skin reactions and 27 developed late skin reactions.
Survival curves and the resulting radiobiological parameters showed a greater harmful effect of C-ions than of X-rays, with an RBE 37% of 4.77. Moreover, taking into account uncertainties concerning GANIL C-ion dosimetry as reported by Pautard et al. [18], the discrepancy between C-ions and X-rays may be greater than thought. Confluent fibroblasts were not very radiosensitive to X-rays, as shown by the low α and β values. In contrast, the fibroblast survival curve after C-ion irradiation almost followed a linear model, with a high α value and a β value close to 0, suggesting that primary DNA damage was repaired with difficulty. The irradiation doses chosen for subsequent experiments corresponded to around D10% and D0%. The chosen C-ion irradiation doses were in the range of 3.3 to 4.6 GyE/fraction used at NIRS and responsible for skin complications [3]. To understand the origins of this deleterious effect of C-ions compared with X-rays, we were interested in the OS pathway, which may explain the appearance of late cutaneous damage.
In the first hour after irradiation, DNA damage was increased less after C-ions than after X-rays. C-ion damage is produced in clusters, leading to smaller DNA fragments compared with X-rays. The comet assay takes into account the quantity of DNA in the comet tail, so X-rays should produce larger fragments, thus increasing the measured OTMs. In this way, the alkaline comet assay may underestimate the DNA damage produced by C-ions. Interestingly, an increase in DNA damage was observed 3 hours after C-ion irradiation. This increase could be induced by (i) new ROS production, (ii) secondary strand breaks produced during the DNA repair process, as intermediate breaks before ligation, or (iii) DNA misrepair producing damage. As it is well known that DNA damage clusters may be wrongly repaired, the third hypothesis should be preferred. Moreover, the almost linear shape of the fibroblast survival curve after C-ion irradiation suggested DNA damage that is difficult to repair, which is in agreement with DNA damage clusters. In addition to DNA damage, ROS induced by X-rays and C-ions cause primary lesions that may affect lipids and proteins. Carbonyl groups result from protein oxidation. ROS can also induce polyunsaturated fatty acid peroxidation. Lipid hydroperoxides are degraded mainly into MDA and HAEs, which react covalently with and may inactivate proteins. Lipid peroxidation and protein carbonylation measurements were not greatly changed. The main effect occurred at day 1, with an increase in MDA and HAE and in carbonyl values in fibroblasts irradiated by C-ions compared with X-rays. This early wave of lipid and protein degradation products could be reduced at later times due to lipid and protein repair.
Surprisingly, the three main antioxidant enzyme activities were decreased after C-ion irradiation. This decreased detoxifying capacity suggests that the antioxidant enzymes were not active after irradiation, due either to a decrease in transcription or to an inhibition, except for GPx activity on day 21, which suggests a late increase in transcription, perhaps due to the prolonged exposure to oxidative stress. The GSH/GSSG ratio was increased from day 7 after C-ion irradiation compared with X-rays. This increase was not due to an increase in GSH synthesis, as the GSH amount was not much changed after irradiation, except at day 21 where a strong decrease was observed after X-rays (data not shown). This GSH/GSSG increase could be linked to the decrease in the activity of GPx, which uses GSH to detoxify peroxide. It would be worthwhile to evaluate glutathione reductase activity, as this enzyme recycles GSSG into GSH. Overall, our results suggest that C-ion irradiation induced an imbalance between ROS production and ROS detoxification processes, leading to persistent oxidative lesions in normal skin fibroblasts. Finally, we investigated the expression of inflammatory mediators, which is generally linked to ROS production. Surprisingly, no significant changes in TNF-α expression were observed after C-ion vs. X-ray irradiation. Moreover, IL-1β levels were undetectable. IL-6 levels increase after both types of irradiation, in agreement with the literature [19]. Interestingly, IL-6 concentration was increased more on day 7 after C-ions than after X-rays. However, the IL-6 level after D10% C-ion irradiation was lower on days 14 and 21 compared with X-rays. In this way, inflammatory pathways did not seem to be strongly involved in the deleterious effects observed after C-ion irradiation of skin fibroblasts, except for IL-6 on day 7.
Our experiments suggest that macromolecular damage was greatly increased and that antioxidant defences were much decreased during the first three weeks after C-ion compared with X-ray irradiation. Taken together, these data could explain, at least in part, the late cutaneous and sub-cutaneous complications reported by Yanagi et al. [3]. Further work is needed to understand the reasons for this increase in OS in normal human skin fibroblasts exposed to C-ions. Studies on the gene regulation of oxidative metabolism, DNA damage and repair, and senescence pathways are already in progress. Studies on other cell types (haematopoietic stem/progenitor cells, oral squamous cell carcinoma) have been reported [14,20], but in those studies C-ion irradiation was not compared with X-rays, and the values of interest were not normalized to protein concentration.
"Medicine",
"Physics"
] |
Bifurcations of Traveling Wave Solutions for the Coupled Higgs Field Equation
By using the bifurcation theory of dynamical systems, we study the coupled Higgs field equation; the existence of new solitary wave solutions and of uncountably infinitely many periodic wave solutions is obtained. Under different parametric conditions, various sufficient conditions guaranteeing the existence of the above solutions are given. All exact explicit parametric representations of the above waves are determined.
Introduction
Recently, by using an algebraic method, Hon and Fan [1] studied the following coupled Higgs field equation:

u_tt − u_xx − αu + β|u|²u − 2uv = 0,  v_tt + v_xx − β(|u|²)_xx = 0.  (1.1)

The Higgs field equation [2] describes a system of conserved scalar nucleons interacting with neutral scalar mesons. Here, u(x, t) represents a complex scalar nucleon field and v(x, t) a real scalar meson field, with real constants α and β. Equation (1.1) is the coupled nonlinear Klein-Gordon equation for α < 0, β < 0 and the coupled Higgs field equation for α > 0, β > 0. The existence of N-soliton solutions of (1.1) has been shown by the Hirota bilinear method [3].
It is very important to consider the bifurcation behavior of the traveling wave solutions of (1.1). In this paper, we consider (1.1) and its traveling wave solutions in the form of (1.2). Substituting (1.2) into (1.1) and assuming c² − 1 ≠ 0 reduces system (1.1) to a system of ordinary differential equations (1.3), where "′" denotes the derivative with respect to ξ. Integrating the second equation of (1.3) once and the third equation of (1.3) twice, respectively, we obtain (1.4), where g₂ ≠ 0 and g₁ are integration constants. Substituting (1.4) into the first equation of (1.3) yields (1.5). Equation (1.5) is equivalent to the two-dimensional system (1.6) with the first integral H(φ, y) defined by (1.8). System (1.6) is a 3-parameter planar dynamical system depending on the parameter group (a, b, e). For fixed a, we investigate the bifurcations of the phase portraits of (1.6) in the phase plane (φ, y) as the parameters (b, e) vary. Here we are considering a physical model where only bounded traveling waves are meaningful, so we pay attention only to the bounded solutions of (1.6).
Suppose that φ(ξ) is a continuous solution of (1.6) for ξ ∈ (−∞, ∞) with lim_{ξ→∞} φ(ξ) = a₁ and lim_{ξ→−∞} φ(ξ) = a₂. Recall that (i) φ(x, t) is called a solitary wave solution if a₁ = a₂; (ii) φ(x, t) is called a kink or antikink solution if a₁ ≠ a₂. Usually, a solitary wave solution of (1.6) corresponds to a homoclinic orbit of (1.6); a kink or antikink wave solution of (1.6) corresponds to a heteroclinic orbit (the so-called connecting orbit) of (1.6). Similarly, a periodic orbit of (1.6) corresponds to a periodic traveling wave solution of (1.6). Thus, to investigate all possible bifurcations of solitary waves and periodic waves of (1.6), we need to find all periodic annuli and homoclinic orbits of (1.6), which depend on the system parameters. The bifurcation theory of dynamical systems (see [4-11]) plays an important role in our study.
The paper is organized as follows. In Section 2, we discuss bifurcations of the phase portraits of (1.6), where explicit parametric conditions are derived. In Section 3, all explicit parametric representations of bounded traveling wave solutions are given. Section 4 contains the concluding remarks.
Bifurcations of Phase Portraits of (1.6)
In this section, we study all possible periodic annuli defined by the vector fields of (1.6) when the parameters (b, e) are varied.
2.1
Now, the straight line φ = 0 is an invariant straight line of (2.1). When φ = φ± = ±√(−2b/3), we have f(φ±) = 0, which implies the corresponding relations in the (b, e)-parameter plane. Thus, we have the following.
Let M(φₑ, yₑ) be the coefficient matrix of the linearized system of (2.1) at an equilibrium point (φₑ, yₑ). Then we have

J(φₑ, 0) = det M(φₑ, 0) = aφₑ³ f(φₑ) = −2aφₑ⁶(3φₑ² + 2b).  (2.5)
By the theory of planar dynamical systems, for an equilibrium point of a planar integrable system: if J < 0, then the equilibrium point is a saddle point; if J > 0 and Trace M(φₑ, yₑ) = 0, then it is a center point; if J > 0 and (Trace M(φₑ, yₑ))² − 4J(φₑ, yₑ) > 0, then it is a node; if J = 0 and the index of the equilibrium point is 0, then it is a cusp; otherwise, it is a high-order equilibrium point.
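These classification rules translate directly into a small routine. The sketch below assumes a generic planar system φ′ = P(φ, y), y′ = Q(φ, y) with a numerically estimated linearization, since the explicit right-hand sides of (1.6) and (2.1) are not reproduced in this text.

```python
import numpy as np

def classify_equilibrium(P, Q, phi_e, y_e, h=1e-6):
    """Classify an equilibrium of phi' = P(phi, y), y' = Q(phi, y) using
    J = det M and Trace M of the linearization M, following the criteria above."""
    M = np.array([
        [(P(phi_e + h, y_e) - P(phi_e - h, y_e)) / (2 * h),
         (P(phi_e, y_e + h) - P(phi_e, y_e - h)) / (2 * h)],
        [(Q(phi_e + h, y_e) - Q(phi_e - h, y_e)) / (2 * h),
         (Q(phi_e, y_e + h) - Q(phi_e, y_e - h)) / (2 * h)],
    ])
    J, tr = np.linalg.det(M), np.trace(M)
    if J < 0:
        return "saddle"
    if J > 0 and np.isclose(tr, 0.0):
        return "center"                      # integrable planar system
    if J > 0 and tr**2 - 4 * J > 0:
        return "node"
    return "cusp or higher-order point"      # J = 0 cases need the index

# Illustrative system with an assumed cubic right-hand side (not system (1.6)):
P = lambda phi, y: y
Q = lambda phi, y: phi * (1.0 - phi**2)
print(classify_equilibrium(P, Q, 0.0, 0.0))  # saddle
print(classify_equilibrium(P, Q, 1.0, 0.0))  # center
```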
For the function H defined by (1.8), we introduce the corresponding notation. We next use the above statements to consider the bifurcations of the phase portraits of (2.1). In the (b, e)-parameter plane, the curves L and the straight line e = 0 partition it into 4 regions, shown in Figure 1.
We use Figures 2 and 3 to show the bifurcations of the phase portraits of (2.1). Notice that for a > 0, e < 0 with (b, e) ∈ III ∪ IV, or for a < 0, e > 0 with (b, e) ∈ I ∪ II, we have ae < 0, so we do not give the phase portraits of (2.1) for these cases.
Case 1 (a > 0). We use Figure 2 to show the bifurcations of the phase portraits of (2.1).
Case 2 (a < 0). We use Figure 3 to show the bifurcations of the phase portraits of (2.1).
Exact Explicit Parametric Representations of Traveling Wave Solutions of (1.6)
In this section, we give all exact explicit parametric representations of solitary wave solutions and periodic wave solutions. Here sn(x, k) denotes the Jacobian elliptic function with modulus k, and Π(ϕ, α², k) denotes Legendre's incomplete elliptic integral of the third kind (see [12]).
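For numerical evaluation of such elliptic-function solutions, SciPy exposes the Jacobian elliptic functions; note that scipy.special.ellipj takes the parameter m = k² rather than the modulus k. A minimal sketch:

```python
import numpy as np
from scipy.special import ellipj

k = 0.8                            # modulus of the elliptic function
u = np.linspace(0.0, 10.0, 5)
sn, cn, dn, ph = ellipj(u, k**2)   # SciPy uses the parameter m = k**2
print(sn)                          # sn(u, k), as used in the periodic-wave formulas
```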
(1) Suppose that a > 0 and (b, e) ∈ II. Notice that H(φ₁, 0) = h₁, corresponding to H(φ, y) = h₁ defined by (1.8); we see from (1.6) that the arch curve connects A(φ₁, 0) (see Figure 2(b)). The arch curve has the algebraic equation (3.1), where ψ₃ > ψ₂ > 0 > ψ₁ satisfy equation (3.2). By using the first equation of (1.6) and (3.1), we obtain the parametric representation (3.3) of a smooth solitary wave solution of valley type and a smooth solitary wave solution of peak type. Thus, (1.1) has a solitary wave solution of valley type and a solitary wave solution of peak type given by (3.4). For h ∈ (−∞, h₁), corresponding to H(φ, y) = h defined by (1.8), system (1.6) has two families of periodic solutions enclosing the centers A(φ₁, 0) and A₋(−φ₁, 0), respectively. These orbits determine uncountably infinitely many periodic wave solutions of (1.1) (see Figures 3(a) and 3(b)). These orbits have the algebraic equation (3.5).

Integrating along the periodic orbits, it follows that (3.6). Substituting φ² = ψ into (3.6), we have (3.7), where ψ_M > ψ_l > 0 > ψ_m. From (3.7), we have (3.8). Thus, (1.1) has the following uncountably infinitely many periodic wave solutions, given by (3.9).
Conclusion
In this paper, we have considered all traveling wave solutions of the coupled Higgs field equation (1.1) in its parameter space by using the method of dynamical systems. We obtain all parametric representations of solitary wave solutions and of uncountably infinitely many periodic wave solutions of (1.1) in different regions of the parameter space.
Figure 1: The bifurcation set of (1.6) in the (b, e)-parameter plane.
Figure 2: The phase portraits of (2.1) for a > 0.
Figure 3: The phase portraits of (2.1) for a < 0.
"Mathematics"
] |
Continuous layer gap plasmon resonators
We demonstrate both theoretically and experimentally that a gold nanostrip supported by a thin dielectric (silicon dioxide) film and a gold underlay forms an efficient (Fabry-Perot) resonator for gap surface plasmons. Periodic nanostrip arrays are shown to exhibit strong and narrow resonances with nearly complete absorption and quality factors of ~15-20 in the near-infrared. Two-photon luminescence microscopy measurements reveal intensity enhancement factors of ~120 in the 400-nm-period array of 85-nm-wide gold strips atop a 23-nm-thick silica film at the resonance wavelength of ~770nm. Excellent resonant characteristics, the simplicity of tuning the resonance wavelength by adjusting the nanostrip width and/or the dielectric film thickness and the ease of fabrication with (only) one lithography step required make the considered plasmonic configuration very attractive for a wide variety of applications, ranging from surface sensing to photovoltaics. ©2011 Optical Society of America OCIS codes: (250.5403) Plasmonics; (240.6680) Surface plasmons; (260.3910) Metal optics; (190.0190) Nonlinear optics; (300.1030) Absorption; (040.5350) Photovoltaic. References and links 1. L. Novotny and B. Hecht, “Principles of Nano-Optics,” Cambridge University Press, Cambridge, (2006). 2. W. Zhang, L. Huang, C. Santschi, and O. J. F. Martin, “Trapping and sensing 10 nm metal nanoparticles using plasmonic dipole antennas,” Nano Lett. 10(3), 1006–1011 (2010). 3. M. L. Juan, M. Righini, and R. Quidant, “Plasmon nano-optical tweezers,” Nat. Photonics 5(6), 349–356 (2011). 4. A. Weber-Bargioni, A. Schwartzberg, M. Schmidt, B. Harteneck, D. F. Ogletree, P. J. Schuck, and S. Cabrini, “Functional plasmonic antenna scanning probes fabricated by induced-deposition mask lithography,” Nanotechnology 21(6), 065306 (2010). 5. J. N. Farahani, D. W. Pohl, H.-J. Eisler, and B. Hecht, “Single quantum dot coupled to a scanning optical antenna: a tunable superemitter,” Phys. Rev. Lett. 95(1), 017402 (2005). 6. H. A. Atwater and A. Polman, “Plasmonics for improved photovoltaic devices,” Nat. Mater. 9(3), 205–213 (2010). 7. L. Tang, S. Latif, A. K. Okyay, D.-S. Ly-Gagnon, K. C. Saraswat, and D. A. B. Miller, “Nanometre-scale germanium photodetector enhanced by a near-infrared dipole antenna,” Nat. Photonics 2(4), 226–229 (2008). 8. J. A. Schuller, E. S. Barnard, W. Cai, Y. C. Jun, J. S. White, and M. L. Brongersma, “Plasmonics for extreme light concentration and manipulation,” Nat. Mater. 9(3), 193–204 (2010). 9. P. Bharadwaj, B. Deutsch, and L. Novotny, “Optical antennas,” Adv. Opt. Photon. 1(3), 438–483 (2009). 10. L. Novotny and N. Van Hulst, “Antennas for light,” Nat. Photonics 5(2), 83–90 (2011). 11. D. K. Gramotnev and S. I. Bozhevolnyi, “Plasmonics beyond the diffraction limit,” Nat. Photonics 4(2), 83–91 (2010). 12. A. Kinkhabwala, Z. Yu, S. Fan, Y. Avlasevich, K. Mullen, and W. E. Moerner, “Large single-molecule fluorescence enhancements produced by bowtie nanoantenna,” Nat. Photonics 3(11), 654–657 (2009). 13. T. Søndergaard and S. I. Bozhevolnyi, “Slow-plasmon resonant nanostructures: Scattering and field enhancements,” Phys. Rev. B 75(7), 073402 (2007). 14. S. I. Bozhevolnyi and T. Søndergaard, “General properties of slow-plasmon resonant nanostructures: Nanoantennas and resonators,” Opt. Express 15(17), 10869–10877 (2007). 15. T. Søndergaard and S. I. Bozhevolnyi, “Metal nano-strip optical resonators,” Opt. Express 15(7), 4198–4204 (2007). 
16. T. Søndergaard and S. I. Bozhevolnyi, “Strip and gap plasmon polariton optical resonators,” Phys. Stat. Solidi B 245(1), 9–19 (2008). 17. T. Søndergaard, J. Beermann, A. Boltasseva, and S. I. Bozhevolnyi, “Slow-plasmon resonant-nanostrip antennas: Analysis and demonstration,” Phys. Rev. B 77(11), 115420 (2008). 18. J. Jung, T. Søndergaard, and S. I. Bozhevolnyi, “Gap plasmon-polariton nanoresonators: Scattering enhancement and launching of surface plasmon polaritons,” Phys. Rev. B 79(3), 035401 (2009). 19. J. Jung, T. Søndergaard, J. Beermann, A. Boltasseva, and S. I. Bozhevolnyi, “Theoretical analysis and experimental demonstration of resonant light scattering from metal nanostrips on quartz,” J. Opt. Soc. Am. B 26(1), 121–124 (2009). 20. H. T. Miyazaki and Y. Kurokawa, “Squeezing visible light waves into a 3-nm-thick and 55-nm-long plasmon cavity,” Phys. Rev. Lett. 96(9), 097401 (2006). 21. P. Bouchon, F. Pardo, B. Portier, L. Ferlazzo, P. Ghenuche, G. Dagher, C. Dupuis, N. Bardou, R. Haidar, and J.-L. Pelouard, “Total funneling of light in high aspect ratio plasmonic nanoresonators,” Appl. Phys. Lett. 98(19), 191109 (2011). 22. T. Søndergaard, J. Jung, S. I. Bozhevolnyi, and G. D. Valle, “Theoretical analysis of gold nanostrip gap plasmon resonators,” N. J. Phys. 10(10), 105008 (2008). 23. G. Lerosey, D. F. P. Pile, P. Matheu, G. Bartal, and X. Zhang, “Controlling the phase and amplitude of plasmon sources at a subwavelength scale,” Nano Lett. 9(1), 327–331 (2009). 24. G. Lévêque and O. J. F. Martin, “Tunable composite nanoparticle for plasmonics,” Opt. Lett. 31(18), 2750–2752 (2006). 25. Y. Chu and K. B. Crozier, “Experimental study of the interaction between localized and propagating surface plasmons,” Opt. Lett. 34(3), 244–246 (2009). 26. Y. Chu, M. G. Banaee, and K. B. Crozier, “Double-resonance plasmon substrates for surface-enhanced Raman scattering with enhancement at excitation and Stokes frequencies,” ACS Nano 4(5), 2804–2810 (2010). 27. R. Ameling, D. Dregely, and H. Giessen, “Strong coupling of localized and surface plasmons to microcavity modes,” Opt. Lett. 36(12), 2218–2220 (2011). 28. S. H. Lim, W. Mar, P. Matheu, D. Derkacs, and E. T. Yu, “Photocurrent spectroscopy of optical absorption enhancement in silicon photodiodes via scattering from surface plasmon polaritons in gold nanoparticles,” J. Appl. Phys. 101(10), 104309 (2007). 29. P. B. Johnson and R. W. Christy, “Optical constants of the noble metals,” Phys. Rev. B 6(12), 4370–4379 (1972). 30. J. Jin, “The finite element method in electromagnetics,” Wiley: New York, p 429 (1993). 31. F. Wang and Y. R. Shen, “General properties of local plasmons in metal nanostructures,” Phys. Rev. Lett. 97(20), 206806 (2006). 32. J. Beermann, S. M. Novikov, T. Søndergaard, A. E. Boltasseva, and S. I. Bozhevolnyi, “Two-photon mapping of localized field enhancements in thin nanostrip antennas,” Opt. Express 16(22), 17302–17309 (2008). 33. M. G. Nielsen, A. Pors, R. B. Nielsen, A. Boltasseva, O. Albrektsen, and S. I. Bozhevolnyi, “Demonstration of scattering suppression in retardation-based plasmonic nanoantennas,” Opt. Express 18(14), 14802–14811 (2010). 34. J. Beermann, I. P. Radko, A. Boltasseva, and S. I. Bozhevolnyi, “Localized field enhancements in fractal shaped periodic metal nanostructures,” Opt. Express 15(23), 15234–15241 (2007). 35. J. Beermann, A. Evlyukhin, A. Boltasseva, and S. I. Bozhevolnyi, “Nonlinear microscopy of localized field enhancements in fractal shaped periodic metal nanostructures,” J. Opt. Soc. Am. B 25(10), 1585–1592 (2008). 36. J. Beermann, T. Søndergaard, S. M. Novikov, S. I. Bozhevolnyi, E. Devaux, and T. W. Ebbesen, “Field enhancement and extraordinary optical transmission by tapered periodic slits in gold films,” N. J. Phys. 13(6), 063029 (2011). 37. P. J. Schuck, D. P. Fromm, A. Sundaramurthy, G. S. Kino, and W. E. Moerner, “Improving the mismatch between light and nanoscale objects with gold bowtie nanoantennas,” Phys. Rev. Lett. 94(1), 017402 (2005).
Introduction
Subwavelength confinement and enhancement of light in metallic nanostructures due to resonant excitation of surface plasmon polaritons is a rapidly growing research direction in nano-optics and nanophotonics, with a major focus on the development of new efficient approaches for the delivery of light energy to nanoscale objects and single molecules [1]. This is because of the unique opportunities offered by plasmonic subwavelength resonators for the design of plasmonic nanosensors, nanomanipulation and near-field trapping techniques [2,3], high-resolution probes for nanoimaging and new information processing approaches [4,5], improved photovoltaics [6], nanoscale photodetectors with significantly enhanced signal-to-noise ratio [7,8], catalysis applications [8], and efficient coupling of light energy to nanoscale structures, quantum dots and single molecules [9-11]. Plasmonic resonators are also expected to enable observation and application of highly localized and enhanced non-linear effects and near-field spectroscopy, including spectroscopic analysis, imaging and identification of nanoscale amounts of substances and single molecules [8-12].
Theoretical studies of gap plasmon resonators (GPRs) have so far been mainly confined to structures where the same truncation was carried out for more than one layer in a MIM structure [16,22-24]. Meanwhile, it has been pointed out that a finite-width metal strip placed close to a metal surface does ensure efficient reflection of GSPs at the strip terminations, forming an efficient GPR [18]. One might pursue this idea further by exploring plasmonic configurations in which finite-size metal patches/strips are placed on a thin dielectric layer supported by a metal underlay (or by a thick metal film atop a substrate). It seems reasonable to expect that such a structure (whose fabrication requires only one lithographic step) would exhibit retardation (Fabry-Perot) resonances due to multiple reflections of GSPs. In fact, numerical and experimental investigations of plasmonic nanoresonators in the form of metal nanodisks on a continuous thin dielectric film on a metal underlay have recently been reported [25,26], but no evidence or analysis of the contribution of GSPs and retardation to the observed resonances has been presented. The obtained results were investigated and interpreted only under the assumption of a quasistatic resonance in the disk, red-shifted by the near-field interaction between the gold nanodisk and its electrostatic image in the metal substrate (thick gold underlay) [25,26]. Another very recent paper has considered plasmonic resonators in the form of gold strips forming a periodic array on a continuous thin dielectric layer on a gold underlay [27]. However, the analysis in that paper was focused on the interaction of these structures with an additional microcavity formed by two microscopically separated mirrors [27]. The analysis of the strip resonators themselves, their excitation by an incident bulk wave (including absorption and scattering cross-sections), typical local field enhancements, achievable Q-factors, and the retardation nature of the resonances, related to GSP generation and multiple reflection at the strip terminations, has so far been left out. This leaves major gaps in the current physical understanding of GPRs and their potential applications.
In this paper, we introduce and investigate, both theoretically and experimentally, a simple plasmonic resonant structure termed a continuous layer gap plasmon resonator (CL-GPR), for which nanoscale truncation is carried out only for the top layer of a MIM structure (which can be achieved with only a single lithographic step). The spatial extension of the considered GSPs is defined by the width of a metal strip fabricated on a (thin) continuous dielectric layer supported by an underlying thick metal film (underlay). Arrays of CL-GPRs are fabricated and analyzed for different periods and widths of the metal strips, featuring strong and narrow resonances. We demonstrate that the predicted and observed resonances occur due to GSPs experiencing multiple reflections from the edges of each of the metal strips.
Structure and numerical methods
The considered CL-GPR structure consists of a continuous SiO2 film (n_d = 1.45) of thickness t sandwiched between a 200-nm-thick continuous gold underlay and a gold nanostrip of height h and width w (Fig. 1). The cladding above the structure is assumed to be air (n = 1), and the CL-GPR structure is uniform along the z-direction (Fig. 1). The physical principles of the considered resonator structures are similar to those of a conventional Fabry-Perot resonator. The GSP guided by the gap (filled with SiO2) between the metal strip and the underlay experiences multiple reflections from the terminations (edges) of the strip. The radiation losses at the edges of the strip are due to electric dipole radiation caused by the oscillating opposite charges across the gap. However, because the size of this electric dipole moment is ~t (i.e., tens of nanometers, much smaller than the wavelength), these radiation losses are significantly weaker than those in the case of the metal strip (IMI) resonator [17]. As a result, the typical Q-factors of CL-GPRs are expected to be significantly higher than those of IMI resonators.
To model the resonator's optical response, we use the finite-element method (FEM) implemented in the commercial software COMSOL MULTIPHYSICS, and consider a monochromatic plane wave with amplitude E0, polarized along the x-axis (TM-wave) and incident normally onto the CL-GPR structure (Fig. 1). Perfect electric conductor boundary conditions are applied to the unit-cell side walls, thereby mimicking structural periodicity in the x-direction [28]. The corners of the gold nanostrip are assumed to be rounded with a radius of 5nm, and the permittivity of gold is described by a complex-valued frequency-dependent dielectric function ε(ω) obtained by cubic interpolation of the tabulated values [29]. The CL-GPR reflectivity is calculated as a function of wavelength by integrating the Poynting vector along a first-order absorption boundary [30], which also generates the incident wave and is positioned 800nm above the gold underlay. The calculated reflectivity is normalized to the reflectivity from a uniform smooth gold surface with the thin SiO2 layer. The accuracy and validity of the obtained results were also confirmed by using a perfectly matched layer, instead of the first-order absorption boundary, to suppress artificial reflections. Figure 2(a) also demonstrates that decreasing the thickness t of the SiO2 layer results in an increasing resonant wavelength. This can be regarded as further confirmation that the predicted resonances are caused by GSPs between the gold strips and the gold underlay, because a decreasing thickness of the SiO2 layer (gap width) results in a decreasing GSP wavelength, i.e., an increasing vacuum wavelength corresponding to the fundamental resonance at a fixed width of the gold strip. However, this simple trend of increasing resonant wavelength with decreasing t does not seem to hold well at larger thicknesses of the SiO2 layer [solid curves in Fig. 2(c)]. For example, for the CL-GPR array with the period Λ = 400nm [the lower solid curve in Fig. 2(c)], the resonant wavelength tends to be approximately constant for SiO2 thicknesses t > 50nm. Furthermore, for the CL-GPR array with the period Λ = 600nm [the upper solid curve in Fig. 2(c)], the resonant wavelength tends to increase (the resonance is red-shifted) for t > 30nm. This observed behavior of the resonant wavelength can be explained as follows.
Reflectivity dependence on structural parameters
The resonance condition for a CL-GPR can be written as for any other Fabry-Perot-type resonator:

k_gsp·w + ϕ = mπ,  (1)

where k_gsp = 2πn_gsp/λ, λ is the free-space wavelength, n_gsp is the effective index of the GSP, m is an integer determining the order of the resonance mode, and ϕ is the phase acquired by the GSP upon its reflection at the resonator terminations (edges of the strip). Considering only the fundamental mode (m = 1) and defining the phase parameter ž = (1/2)(1 − ϕ/π), Eq. (1) can be rearranged as:

λ = n_gsp·w/ž.  (2)

If the phase shift ϕ is zero, then ž = 1/2 (as for a perfectly conducting mirror).
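With Eqs. (1)-(2) in the form reconstructed above, the fundamental resonance wavelength follows from a one-line estimate. In the sketch below, n_gsp and ž are placeholder inputs (in practice n_gsp follows from the gap plasmon dispersion for the given film thickness t):

```python
def clgpr_fundamental_wavelength(w_nm, n_gsp, z_phase=1/3):
    """Eq. (2) for the fundamental mode (m = 1): lambda = n_gsp * w / z."""
    return n_gsp * w_nm / z_phase

# Placeholder effective index for a thin-gap GSP; z ~ 1/3 is the typical
# phase-parameter value quoted later in the Conclusions.
print(clgpr_fundamental_wavelength(w_nm=85.0, n_gsp=3.0))  # ~765 nm
```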
However, reflection of a GSP from the terminations of a metal strip in the geometry shown in Fig. 1 is associated with a non-zero phase shift ϕ that depends upon the structural and material parameters. The physical explanation for this non-zero phase shift is the extension of the plasmon field beyond the edges of the metal strip. As a result, plasmon reflections can be regarded as effectively occurring not at the edges of the strip, but rather at some 'effective boundaries' of the CL-GPR, indicated by the dashed vertical lines in Fig. 1.
The dashed curves in Fig. 2(c) show the calculated dependences of ž on the thickness t of the SiO2 layer. It can be seen that, for both considered periods of the CL-GPR arrays, the phase parameter ž monotonically decreases from ~0.4 to ~0.2-0.3 as t increases from 10nm to 70nm. This is because, as the gap width (thickness of the SiO2 layer) increases, the GSP propagation constant decreases and becomes closer to that of the bulk wave in air, which causes the plasmonic field to extend further beyond the edges of the gold strip, thus decreasing ž [dashed curves in Fig. 2(c)] or, in other words, moving further away from the limiting case of perfect mirrors with ž = 1/2.
The comparison of the curves in Fig. 2(c) suggests that the additional phase shift also depends on the period of the CL-GPR array, thereby affecting the effective resonator length w'. The additional phase shift and the effective resonator length w' are larger for the larger period (Λ = 600nm) than for Λ = 400nm [Fig. 2(c)]. There are two possible mechanisms that could contribute to this behavior. Firstly, it could be related to interaction between the CL-GPRs and the grating plasmonic resonance (the Rayleigh anomaly). If the grating resonance is absent [Fig. 2(a)], the resonant wavelength of the CL-GPRs monotonically decreases with increasing t [Fig. 2(a,c)]. If the array period is increased to 600nm, grating resonances appear at ~600nm [Fig. 2(b)]. In this case, if t is small [e.g., t = 10nm, the solid curve in Fig. 2(b)], then the CL-GPR resonance and the grating resonance are sufficiently far apart that they do not interact, and the resonant wavelength decreases with increasing t [Fig. 2(b,c)]. However, a further increase of t shifts the CL-GPR resonance closer to the grating resonance [Fig. 2(b)], and these two resonances may start interacting (similar to the case of metallic nanodisk resonators [25,26]). As a result, the effective resonator length w' may significantly increase [and ž decrease, dashed curves in Fig. 2(c)]. It is, nevertheless, questionable whether the described interaction is indeed significant for the resonances shown in Fig. 2(b), as they seem too separated for such an interaction to effectively take place. Therefore, the second possible mechanism that could explain the unusual dependence of w' (and ž) on the period of the CL-GPR array is the interaction (coupling) between neighboring CL-GPRs by means of the generated surface plasmons. GSPs under the gold strips are expected to leak not only into bulk waves in the air, but also into surface plasmons propagating in the gaps between the CL-GPRs. These surface plasmons may result in efficient radiative coupling between the CL-GPR cavities. Because this type of coupling should rely heavily upon interference effects involving the surface plasmons between the CL-GPR cavities, the coupling efficiency is expected to vary significantly with the array period. If the array period is Λ = 400nm, the wavelength of the surface plasmons between the CL-GPR cavities is close to 2Λ, which means that surface plasmons leaked from two neighboring cavities are approximately in antiphase, and this may reduce the efficiency of cavity coupling. Increasing the grating period to Λ = 600nm [Fig. 2(b)] results in more favorable interference conditions for the surface plasmons, which enhances the coupling between the cavities in the CL-GPR array.
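The antiphase argument can be checked with a quick estimate of the air-gold SPP wavelength. In the sketch below, the gold permittivity near 770nm is an illustrative placeholder rather than a tabulated value:

```python
import cmath

def spp_wavelength(lambda0_nm, eps_metal, eps_diel=1.0):
    """SPP wavelength at a metal-dielectric interface:
    n_spp = sqrt(eps_m*eps_d/(eps_m+eps_d)); lambda_spp = lambda0/Re(n_spp)."""
    n_spp = cmath.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))
    return lambda0_nm / n_spp.real

# Placeholder gold permittivity near 770 nm (illustrative only):
print(spp_wavelength(770.0, eps_metal=-22.0 + 1.4j))  # ~752 nm, close to 2*400 nm
```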
In other words, the observed resonance for Λ = 600nm might be caused by a complex structural mode that is a combination of mutually transforming GSPs under the strips and surface plasmons between the strips, rather than just by GSPs under the individual strips [see also Fig. 6(d)]. Reducing the thickness t of the SiO2 layer reduces the efficiency of leakage of the GSPs into the surface plasmons between the strips, and the structural resonance tends towards that of an individual CL-GPR [including the expected increase of the resonant wavelength with decreasing t, Fig. 2(c)]. On the contrary, at relatively large values of t, efficient mutual transformation of gap and surface plasmons at the edges of the gold strips leads to significant deviation of the resonance behavior from the expected trends for an individual CL-GPR. For example, this may cause the unexpected red shift of the resonant wavelength with increasing t, especially at the larger structural period Λ = 600nm, which ensures more favorable interference conditions for the surface plasmons [the upper solid curve in Fig. 2(c)].
The obtained results and interpretations can be further illustrated by the distributions of the electric field in CL-GPRs (Fig. 3 and Fig. 4). The presented electric field distributions clearly display the standing-wave patterns of the GSP with a node under the center of the gold strip, as expected for the fundamental resonance mode. It is also evident that even for very thin SiO2 layers the mode fields extend significantly beyond the edges of the gold strip [Fig. 3(a) and Fig. 4(a)], which confirms the above conclusion that the effective resonator length w' is significantly larger than the width of the gold strip w. If the SiO2 thickness is increased [e.g., to t = 70nm, Fig. 3(b) and Fig. 4(b)], the fundamental mode becomes practically delocalized and effectively spreads over the whole CL-GPR array. Importantly, this delocalization is stronger for the larger array period of 600nm [Fig. 4(b)], which is probably a consequence of the discussed coupling of the CL-GPR cavities by means of surface plasmons in the gaps between the gold strips (see above). This delocalization of the fundamental CL-GPR mode in the x-direction at larger values of t is also demonstrated by the dependences of the normalized electric field in the middle of the SiO2 layer as a function of the x-coordinate [Fig. 3(c) and Fig. 4(c)]. Despite the discussed delocalization of the fundamental CL-GPR mode (and thus efficient coupling between the CL-GPRs in the array), the electric field is still significantly localized within or near the thin SiO2 layer [Fig. 3(b) and Fig. 4(b)]. Thus the delocalization caused by the cavity coupling occurs anisotropically, mainly along the dielectric layer rather than in the direction perpendicular to it. This is a practically important aspect, as it allows localization of the resonantly enhanced plasmonic field within a thin dielectric (semiconductor) layer, which could be a significant opportunity for increasing the efficiency of photovoltaic devices and photodetectors.
Linear reflectivity measurements
The experimental investigation of CL-GPRs was conducted using several 30µm × 30µm periodic arrays, each consisting of 5 rows of 5µm-long and 53nm-thick gold nanostrips [Fig. 5(a,b)]. The separation between neighboring rows of strips was 1µm [Fig. 5(a)]. The strip width w and the period Λ within each row were fixed for the same array but differed between arrays. The nanostrip arrays were fabricated by single-step lithography, followed by lift-off applied to a 3nm-thick titanium adhesion layer and a 50nm-thick gold film, on a silicon wafer pre-coated with a 100nm-thick gold film (gold underlay) and a 23nm-thick SiO2 film, deposited by electron-beam deposition and RF sputtering, respectively. To measure the reflectivity spectra, light from a broadband halogen light source was TM-polarized and subsequently focused by an objective with 60x magnification and 0.85 numerical aperture onto the 30µm × 30µm array. The reflected light was collected by the same objective, filtered spatially such that only the light reflected from the array was collected, sent through an analyzer (parallel to the polarizer) and finally collected by an optical fiber connected to a VIS/NIR spectrometer. The reflectivity from the array was normalized to the reflectivity from a similar-sized control patch of the continuous thick (100nm) gold underlay covered with the 23nm SiO2 film, with no gold strips forming CL-GPR arrays. The control patch was positioned close to the analyzed CL-GPR arrays to ensure approximately the same structural and material parameters as within the arrays.
The obtained experimental dependences of the measured normalized reflectivity from the CL-GPR arrays on the vacuum wavelength are presented in Fig. 6(a,b) for three different values of w = 85nm, 115nm and 135nm [Fig. 6(a)], and Λ = 400nm, 600nm and 800nm [Fig. 6(b)]. For comparison, reflectivity spectra were simulated with the same structural parameters as in the experiment [Fig. 6(c,d)]. All of the experimental curves for the TM polarization of the incident light display the expected resonant behavior associated with the excitation of GSPs in the gaps between the gold strips and the thick gold underlay, confirming the theoretical predictions for CL-GPRs. The wavelength positions of the experimentally observed GSP resonances [Fig. 6(a,b)] are generally in good agreement with the theoretical predictions [Fig. 6(c,d)], albeit the experimentally observed resonances are consistently red-shifted with respect to the calculated (without any fitting) ones. Fitting the simulated spectra by changing the structural parameters would further improve the agreement, but since the simulated reflectivity spectra contain the main important features of the experimental spectra, we decided not to do so. Figure 6(a) demonstrates that, for a fixed array period (Λ = 600nm), the resonant wavelength increases with increasing width of the gold nanostrip. This is in agreement with the understanding that the observed resonances are related to GSPs in the gaps between the gold strips and the gold underlay. The observed red shift of the resonant wavelengths with increasing array period Λ from 400nm to 600nm [the solid and dashed curves in Fig. 6(b,d)] is in agreement with the theoretically predicted increase of the effective CL-GPR length w' [Fig. 2(c)], the discussed coupling between the CL-GPR cavities and the possible interaction between the fundamental GSP resonance and the grating resonance (the Rayleigh anomaly). The reflectivity curve for the TE polarization of the incident light expectedly does not display any resonances in the structure and demonstrates rather featureless behavior at the level of ~96% [Fig. 6(a)], which once again confirms the validity of the presented theoretical interpretation and analysis of these structures.
Figure 6(a) and Fig. 6(b) also suggest that there are optimal structural parameters, such as the period Λ and the strip width w, at which the reflectivity of the incident TM radiation from the structure may be close to zero due to nearly 100% coupling of this radiation into the fundamental CL-GPR mode [see the solid curve in Fig. 6(b) and the solid and dashed curves in Fig. 6(a)]. In particular, the obtained dependences [Fig. 6(a) and Fig. 6(b)] suggest that the separation between neighboring gold strips may be one of the most important optimization parameters. For example, increasing the strip width w at a fixed period Λ = 600nm (i.e., decreasing the separation between the gold strips) results in increasing strength of the CL-GPR fundamental resonance [Fig. 6(a)]. Similarly, decreasing the period Λ at a fixed width w = 85nm (i.e., again decreasing the separation between the gold strips) also results in increasing strength of the CL-GPR fundamental resonance [Fig. 6(b)]. An optimal separation between the strips may be expected to ensure the optimal coupling efficiency of the incident radiation into the fundamental mode of the CL-GPRs, leading to nearly 100% absorption of the incident radiation. The Rayleigh anomaly (the grating resonance) is not clearly seen in the experimental spectra for Λ = 600nm, which could be explained by the fact that we used a high numerical aperture. The two-photon luminescence (TPL) measurements probe the locally enhanced fields within a region whose area is equal to A_ref, that is, the area of the focused FH spot (typically, A_array < A_ref). As it is difficult to evaluate A_array and the coupling of TPL from the structure to free space without comprehensive theoretical investigations, the enhancement factor α [Eq. (3)] should be understood as a measure of the average enhancement in the structure as compared to smooth gold. In this particular case, the enhancement factor α was estimated under the assumption A_array = A_ref. The resultant experimental dependence of α on wavelength is shown in Fig. 7(d) and demonstrates excellent agreement of the enhancement maximum with the minimum of the measured reflectivity spectrum for the CL-GPR array with Λ = 400nm, both occurring at the same incident wavelength of 760nm [Fig. 7(d)]. Because TPL is a surface-sensitive technique, the coincidence of the reflectivity minimum with the enhancement maximum is important, as it verifies that the estimated enhancement is caused by the locally intense fields in the CL-GPRs, rather than by surface roughness or other irregularities. It is also important to note that the assumption A_array = A_ref (instead of the actual A_array < A_ref) results in a likely underestimate of the obtained experimental value of α ~126. This is in agreement with the theoretical predictions of larger intensity enhancements of ~225 [Fig. 3(c)].
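The exact form of Eq. (3) is not reproduced in this text; the sketch below therefore uses a commonly used TPL enhancement expression from the related literature [32,34] as an assumption, together with the A_array = A_ref simplification discussed above, and hypothetical detector readings.

```python
def tpl_enhancement(tpl_array, tpl_ref, p_array, p_ref, a_array=None, a_ref=1.0):
    """Assumed form of Eq. (3): alpha = (TPL_array/TPL_ref) * (P_ref/P_array)**2
    * (A_ref/A_array). TPL scales with the square of the incident power."""
    if a_array is None:
        a_array = a_ref            # the A_array = A_ref assumption from the text
    return (tpl_array / tpl_ref) * (p_ref / p_array) ** 2 * (a_ref / a_array)

# Hypothetical readings, chosen only to land near the reported scale of alpha;
# with the actual A_array < A_ref, the true alpha would be larger still.
print(tpl_enhancement(tpl_array=630.0, tpl_ref=50.0, p_array=0.316, p_ref=1.0))
```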
Conclusions
In summary, we have demonstrated numerically and validated experimentally the efficient resonant excitation of GSPs in CL-GPR structures formed by gold nanostrips (fabricated by single-step lithography) on a continuous SiO2 film on a thick gold underlay. The possibility of nearly 100% resonant absorption of the incident TM radiation due to its efficient coupling into the fundamental CL-GPR mode was demonstrated theoretically and confirmed experimentally for thin SiO2 films with thicknesses of ~10-20nm. Though obtained for retardation-based resonances involving GSPs, the theoretically predicted Q-factors of the fundamental CL-GPR modes were shown to be close to the quasistatic limit [31]. The measurements using scanning TPL microscopy confirmed major local field enhancement in the considered structures.
We have presented detailed physical interpretations of the obtained results, elucidating the crucial role of GSPs in plasmonic structures with finite-size metal strips placed on an extended multilayer structure, and developing a general resonator model that can be applied to GPRs. The developed physical interpretation can be used to account for previous observations [25-27], also providing simple design guidelines for further experiments. Thus, for example, the localized plasmon resonance observed in a recent study [27] at the wavelength of 1050nm can be related directly to the fundamental GPR mode, and its resonant wavelength can be evaluated (within 5%) using the developed resonator model [Eq. (2)] with a typical value of the phase parameter ž = 1/3 (taking into account also the GSP dispersion for the given configuration parameters and material constants).
The localization of a significant portion of the enhanced local field inside the thin dielectric layer opens excellent opportunities for using the considered CL-GPR arrays to increase the efficiency of photovoltaic devices and to design photodetectors with enhanced signal-to-noise ratio. CL-GPR structures with the obtained high Q-factors can also form an efficient and cost-effective basis for other plasmonic applications such as nano-optical sensors and surface-enhanced Raman spectroscopy techniques, including single-molecule detection and identification.
Fig. 1. The configuration of a CL-GPR unit cell. A continuous SiO2 film with thickness t is sandwiched between a continuous 200nm-thick gold underlay and a gold strip with height h and width w; the vertical dashed lines indicate the effective resonator length w' > w, caused by the additional phase shift acquired by the GSP upon reflection from an edge of the metal strip. The domain above the SiO2 film is assumed to be air. The CL-GPR unit cell is periodically repeated along the x-axis with period Λ. A plane wave is incident normally onto the structure (along the y-direction) and is polarized along the x-axis (TM polarization).
Figure 2(a) and Fig. 2(b) show the numerical dependences of the reflectivity from a periodic array of gold nanostrips forming an array of CL-GPRs with fixed h = 50nm and w = 85nm, two different structural periods Λ = 400nm [Fig. 2(a)] and Λ = 600nm [Fig. 2(b)], and four indicated thicknesses t of the SiO2 layer. The major feature of all the curves presented in Fig. 2(a) and Fig. 2(b) is the strongly resonant behavior between ~650nm and ~850nm. This behavior is especially evident and strong at smaller thicknesses of the SiO2 layer. This is because gaps of smaller thickness must result in weaker radiation losses at the edges of the strips.
Fig. 2. (a,b) Reflectivity spectra with t as a parameter for two different CL-GPR array periods: (a) Λ = 400nm and (b) Λ = 600nm; w = 85nm. (c) The dependences of the GSP resonance wavelength (solid curves) and of the parameter ž (dashed curves), which determines the additional phase shift of the GSPs upon their reflection from the edges of the gold strip, on the thickness t of the SiO2 layer.
Fig. 3. (a,b) Typical distributions of the resonant electric field magnitude in the (x,y)-plane for two different SiO2 thicknesses, (a) t = 20nm and (b) t = 70nm, in a periodic CL-GPR array with Λ = 400nm. (c) The dependences of the local electric field, normalized to the amplitude of the incident wave E0, in the middle of the SiO2 layer for Λ = 400nm; both dependences were plotted for the resonant wavelengths of 706nm and 616nm for the respective values of t = 20nm and t = 70nm.
Fig. 4. (a,b) Typical distributions of the resonant electric field magnitude in the (x,y)-plane for two different SiO2 layer thicknesses, (a) t = 20nm and (b) t = 70nm, in a periodic CL-GPR array with Λ = 600nm. (c) The dependences of the local electric field, normalized to the amplitude of the incident wave E0, in the middle of the SiO2 layer for Λ = 600nm; both dependences were plotted for the resonant wavelengths of 746nm and 762nm for the respective values of t = 20nm and t = 70nm.
Fig. 5. (a) Schematic of the 30µm × 30µm CL-GPR array in the form of gold strips on a 23nm-thick SiO2 film and a 100nm-thick gold underlay. (b) A representative scanning electron microscopy image showing a small section of a CL-GPR array with w = 135nm, h = 53nm, t = 23nm and Λ = 600nm.
Fig. 6. Experimental (a,b) and simulated (c,d) reflectivity spectra from the CL-GPR arrays for a normally incident beam with the TM polarization: (a,c) fixed Λ = 600nm and three different strip widths w, and (b,d) fixed w = 85nm and three different array periods Λ. The dash-and-dot curve in (a) shows the reflectivity spectrum for a normally incident focused beam with the TE polarization.
Fig. 7. Typical FH (a) and TPL (b) images near the corner of a CL-GPR array with w = 85nm and Λ = 400nm; the polarization of the incident beam is in the x-direction. (c) Typical measured dependences of the FH reflectivity and normalized TPL signals along the x-direction across the array edge at x ~ 4µm. (d) The estimated intensity enhancement factor α as a function of the incident wavelength, superimposed with the measured reflectivity spectrum.
"Materials Science",
"Physics"
] |
Light-RCV: a lightweight read coverage viewer for next generation sequencing data
Background: Next-generation sequencing (NGS) technologies have brought an unprecedented amount of genomic data for analysis. Unlike array-based profiling technologies, NGS can reveal the expression profile across a transcript at the base level. Such base-level read coverage provides further insights into alternative mRNA splicing, single-nucleotide polymorphisms (SNPs), novel transcript discovery, etc. However, to the best of our knowledge, none of the existing NGS viewers can timely visualize genome-wide base-level read coverage in an interactive environment. Results: This study proposes an efficient visualization pipeline and implements a lightweight read coverage viewer, Light-RCV, based on the proposed pipeline. Light-RCV consists of four featured designs on the path from raw NGS data to the final visualized read coverage: i) a read coverage construction algorithm, ii) multi-resolution profiles, iii) a two-stage architecture and iv) a storage format. With these designs, Light-RCV achieves a < 0.5s response time on any scale of genomic range, including whole chromosomes. Finally, a case study was performed to demonstrate the importance of visualizing base-level read coverage and the value of Light-RCV. Conclusions: Compared with multi-functional genome viewers such as Artemis, Savant, Tablet and Integrative Genomics Viewer (IGV), Light-RCV is designed only for visualization. Therefore, it does not provide advanced analyses. However, its backend technology provides an efficient kernel for base-level visualization that can easily be embedded in other viewers. This viewer is the first to provide timely visualization of genome-wide read coverage at the base level in an interactive environment. The software is available for free at http://lightrcv.ee.ncku.edu.tw.
Background
Current next-generation sequencing (NGS) technologies have provided biologists with an unprecedented scale of genomic data requiring analysis [1,2]. Instead of reporting a single expression value for each transcript, as in array-based profiling technologies, NGS technologies can reveal the read count variation within a transcript at the base level. Such base-level read coverage provides further insights for analyzing alternative mRNA splicing, single-nucleotide polymorphisms (SNPs), novel transcript discovery, etc. [3,4].
Constructing a base-level read coverage requires the alignment of numerous reads to the reference genome. Read alignments are difficult for humans to interpret. Many NGS viewers, such as Artemis [5], Savant [6], Tablet [7] and Integrative Genomics Viewer (IGV) [8], have been developed to visualize read alignments as user-friendly graphic profiles. Some of these NGS viewers can depict base-level read coverage, but only on a small scale, while others provide genome-wide read coverage, but not at the base level. To the best of our knowledge, none of the existing NGS viewers can timely visualize genome-wide base-level read coverage in an interactive environment. The considerable data scale and computational complexity make developing such tools challenging.
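Light-RCV's actual construction algorithm is not spelled out in this text; as a point of reference, a standard efficient way to build base-level coverage from alignment intervals is a difference array followed by a prefix sum. The sketch below is a minimal illustration with hypothetical read intervals, not Light-RCV's implementation.

```python
from itertools import accumulate

def base_level_coverage(chrom_len, reads):
    """Build per-base read coverage from (start, end) alignments, 0-based,
    end-exclusive. O(R + N) via a difference array plus a prefix sum."""
    diff = [0] * (chrom_len + 1)
    for start, end in reads:
        diff[start] += 1          # a read enters coverage here...
        diff[end] -= 1            # ...and leaves coverage here
    return list(accumulate(diff))[:chrom_len]

# Hypothetical alignments on a tiny reference:
reads = [(2, 7), (4, 9), (4, 6)]
print(base_level_coverage(10, reads))  # [0, 0, 1, 1, 3, 3, 2, 1, 1, 0]
```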
To address this challenge, this study proposes an efficient visualization pipeline for NGS data and implements a lightweight read coverage viewer, Light-RCV, based on the proposed pipeline. The pipeline consists of four featured designs on the path from read alignments to the final visualized read coverage. The four designs are critical to immediate visualization (i.e., a response time shorter than 0.5 second) of base-level read coverage. Light-RCV was implemented as an offline program with web technology. Most researchers prefer not to upload their NGS data to a remote server; an offline program fulfills this requirement. On the other hand, web technology was chosen because it is suitable for embedding in other web-based NGS tools and is familiar to most biologists. Other offline NGS tools can also embed Light-RCV on top of a native browser component, which is supported by major programming languages such as the WebBrowser class in C#, C++, F# and VB, the WebView class in Java (for Android devices), and the WebView class (for OS X devices) and the UIWebView class (for iOS devices) in Objective-C.
Results and discussion
This section introduces the interface of Light-RCV and reports the results of a performance evaluation. Finally, the results of a case study are presented. Figure 1 shows the appearance of Light-RCV. The main interface provides only a few controls for specifying a genomic range, which are the most frequently used operations. In internal usability tests, almost every first-time user could use Light-RCV to visualize NGS data without any instructions. The controls for the compilation stage, which are hidden in the main interface by default, are described at the end of this subsection.
User interface
To see the read coverage of a specific genomic range, the first step is to choose compiled NGS data with the Sample control (Figure 1(a)). The package of Light-RCV includes a compiled sample, demo-yeast, so that users who have no NGS data at hand can experience Light-RCV. The second step is to specify a genomic range, either by a coordinate range (the Coordinate control, Figure 1(c)) or by a gene name (the Gene control, Figure 1(d)). This alternative is decided by the Range by control (Figure 1(b)). In practice, users need not actually change the Range by control, which changes accordingly whenever users change the Coordinate or Gene control. Light-RCV provides many facilities to make the controls behave naturally. For example, the coordinate start and end are automatically switched when the start is larger than the end. Genes can be specified by gene symbol, name or alias. While typing, users can see the full gene names that fit the current input and select the desired one, namely "auto completion." After the genomic range is selected, clicking the View button (Figure 1(e)) brings up the read coverage (Figure 1(i)) in that range. This can also be done by pressing the Enter key. Clicking the Export button (Figure 1(f)) saves the current view to an image file.
Light-RCV shows three read coverages: Total, for reads aligned to either strand at each position; Positive Strand, for reads aligned to the positive strand; and Negative Strand, for reads aligned to the negative strand. Below the three read coverages is a bar chart of the mismatch rate (%mis) at each position (Figure 1(l)), which is useful for detecting SNPs. The four tracks of information (Total, Positive Strand, Negative Strand and %mis) can be shown/hidden via the legends (Figure 1(j)). Below the four tracks is an annotation track (Figure 1(k)). When the mouse hovers over a position, more detailed information is shown (Figure 1(h)). Note that the composition information is shown when the viewing range is smaller than about 500 bps (depending on the window size). Zooming in can be done simply by dragging in the chart or via the navigation bar (Figure 1(m)). The latter provides intuitive navigational operations (zooming in/out, scrolling, etc.).
Finally, users can click the Settings button (Figure 1(g)) to show the controls for the compilation stage (Figure 1(n)). To compile an NGS experiment, one has to specify four data items: i) Sample ID, for identification, which will be shown in the Sample control (Figure 1(a)); ii) SAM File, which contains the alignments of NGS reads on a reference genome; iii) Reference File, a FASTA file containing the sequence of the reference genome; and iv) GTF File, which contains gene coordinates and annotations. The GTF File is optional but is required for many controls, such as those in Figure 1(d) and (k); specifying a GTF file is therefore recommended. After specifying the data, clicking the Compile button starts the compilation stage. The status is shown in the Status control, and the sample ID is shown in the Sample control after the compilation succeeds.
Performance evaluation
This subsection compares the response time of Light-RCV with that of three popular offline NGS viewers. Table 1 shows the results, where values in parentheses indicate that the corresponding NGS viewer did not display base-level read coverage. Savant and IGV do not display read coverage for genomic regions larger than 20 kilobase pairs (kb) and 70 kb, respectively. Tablet shows only summarized read coverage, in which the read counts of 500 genomic positions are averaged into one value. These settings/limitations were designed for short response times and good user experience (UX). Light-RCV, on the other hand, aims to achieve a shorter response time without these limitations.
The first two sections in Table 1 ("Per NGS experiment" and "Per loading an NGS experiment") represent the time required to prepare NGS data. The preparation time of Light-RCV was longer than those of the other NGS viewers, which is reasonable because Light-RCV moves as many computations as possible to this stage. Notably, the preparation of Light-RCV is conducted only once per NGS experiment, while the other NGS viewers have a startup delay of three to ten seconds whenever users load an NGS experiment. In addition to the startup time, the UX of an NGS viewer relies more on the response time of each genomic range change, which corresponds to "Per visualization of a genomic region" in Table 1. The response time of Light-RCV was less than half a second [9] regardless of the genomic range. Strictly speaking, the read coverage in a large genomic range was not at the base level because of the limitation of screen resolution. Light-RCV smartly detects the screen width and returns only the necessary data points; in this regard, screen width is a factor in the response time of Light-RCV. The numbers in Table 1 were measured on a 1920x1080 screen, which is a rather large screen for contemporary personal computers. UX studies have shown that a response is considered immediate when the delay is shorter than half a second; namely, users feel an immediate response after specifying a genomic region in Light-RCV. This response time is shorter than those of IGV and Tablet for genomic regions smaller than one kilobase pair (kb) and that of Savant for genomic regions smaller than 5 kb. The efficiency of the entire process of converting the raw data to the final visualized read coverage can be estimated by amortizing the preparation time over each genomic position (the "Amortized processing time" in Table 1). The amortized time for a 20 kb region in Light-RCV was 0.53s (56.12s ÷ 12.1 Mb × 20 kb + 0.43s), which is faster than the compared NGS viewers. This shows that the long preparation time of Light-RCV was due to computation arrangement, not performance deficiency. Table 2 shows that Light-RCV consumed the same scale of memory as the other NGS viewers, which reveals that the speed of Light-RCV did not come at the cost of a large cache. The efficient read coverage construction algorithm is the key to the amortized time. Furthermore, the two-stage architecture and the design of the internal format (which moves most computations to the first stage) enable the immediate response time. Table 3 shows the distinctive features of Light-RCV in comparison with other NGS viewers. This table, which focuses on Light-RCV's features, demonstrates the uniqueness of Light-RCV but does not prove that Light-RCV is superior to other NGS viewers. Light-RCV lacks some features of other NGS viewers, such as array data support (expression, copy number, etc.). Table 3 highlights that the largest contribution of Light-RCV is in processing read coverage. Another distinctive feature of Light-RCV is that it is embeddable, which is a benefit of using web technology. (Notes to Tables 1 and 3: Table 1 provides a performance evaluation of the multi-resolution model, which is a key to achieving both base-level and whole-chromosome read coverage in Light-RCV; the reasons why Savant and IGV limit the viewing range of read coverage are unknown. The embeddable feature is likewise not visible to regular users but is useful for developers.) To sum up, Light-RCV is light and fast. It focuses on the most important duty of a viewer: visualization. However, this does not indicate that Light-RCV is better or faster than other multi-functional NGS viewers, which may spend time on more analyses than visualization. Light-RCV should be considered a tool that complements other NGS viewers: researchers can use other NGS viewers to analyze the data and use Light-RCV to view it, as shown in the following subsection.
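The screen-width-aware behavior described above amounts to downsampling the per-base coverage to at most one value per pixel column. The sketch below is a minimal illustration; max-binning is an assumption (it preserves peaks), since the binning rule actually used by Light-RCV is not specified here.

```python
import random

def downsample_for_screen(coverage, screen_px):
    """Return at most screen_px values, one per pixel bin (max-binning)."""
    n = len(coverage)
    if n <= screen_px:
        return list(coverage)
    out = []
    for px in range(screen_px):
        lo = px * n // screen_px
        hi = (px + 1) * n // screen_px
        out.append(max(coverage[lo:hi]))   # keep the peak within each bin
    return out

# A hypothetical 1 Mb coverage array reduced to a 1920-px-wide view:
cov = [random.randint(0, 50) for _ in range(1_000_000)]
print(len(downsample_for_screen(cov, 1920)))  # 1920
```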
Case study
This subsection demonstrates a practical usage flow to show the importance of visualizing read coverage. This case was provided by our collaborative research group, which has used Light-RCV for several months to analyze NGS data.
The operator began the workflow from a read coverage at the whole-chromosome level (Figure 2a, mouse chrI, about 197 Mb). At this level, one might be drawn to the sharpest peaks (the red circles in Figure 2a). However, these peaks are easily identified by almost any analysis tool. In practical analyses, on the other hand, the operator was interested in less obvious peaks (the green circles in Figure 2a) and analyzed them individually. In this case study, the area of the solid green circle was chosen. After zooming into the ~2 Mb area (Figure 2b), the operator identified a peak with a read count higher than 100 (the green circle in Figure 2b). The operator further zoomed into the peak. In this ~47 kb area (Figure 2c), the transcript annotations were shown. The operator observed three clear read coverage peaks (the red circles in Figure 2c) and obtained several transcript candidates (Nop58, Snord70, RF00575.2, ...) from the annotation track below the read coverage. Cuffdiff (a program in the Cufflinks package) [10], one of the most widely used programs for calculating gene expression from NGS data, incorrectly assigned these reads to the gene Nop58, since the read coverage peaks were consistent with some exons (the thick lines) of Nop58. With the aid of the visualized read coverage, the operator quickly determined that the read count of Nop58 was a false positive. Many NGS viewers provide automatic analysis; however, for cases that require visualized read coverage, a short response time is more important than comprehensive analyses.
The operator then zoomed into the right two peaks (the green circle in Figure 2c) and obtained a ~1 kb area (Figure 2d). At this level, the operator could see the shape of the read coverage. The irregular shape of the right transcript (the green circle in Figure 2d) caught the operator's attention.
Finally, the journey ended at a 101 bp area (Figure 2e), which reveals two facts. First, the boundaries of the read coverage peak were narrower by several bases than those of the transcript RF01182.1. This reveals that the quality of the read alignments (performed by TopHat [11] in this case study) was relatively low at the ends of the transcripts. Second, there is a shorter transcript, Snord11, that overlaps with RF01182.1. The read coverage curve in Figure 2e shows a clear decrease near the green circle, which matches the boundary of Snord11 exactly. This illustrates the difficulty of automatically assigning read counts in areas with overlapping transcript annotations. Manual determination with the aid of a visualized read coverage is, at present, a workable compromise for this problem.
In this case study, the transcripts RF01182.1 and Snord11 turned out to be essentially the same transcript once the operator queried other databases such as Ensembl [12]. The issue could therefore be resolved easily by the operator; in other words, there was nothing to solve. However, if the overlapping transcripts had been different, the operator would have had to conduct further analyses. Such analyses vary case by case and are beyond the scope of this study.
In summary, Light-RCV provides a convenient tool for warning operators about these issues. The above workflow heavily relies on manual efforts. Most members of our collaborative research group agreed that the immediate response time of Light-RCV was critical for everyday analyses.
Conclusions
This study proposed four designs for visualizing the read counts of each genomic position. This efficient visualization pipeline was implemented as a lightweight read coverage viewer, Light-RCV, which aims to visualize genome-wide, base-level read coverage promptly in an interactive environment. It achieves an immediate response time and an outstanding amortized processing time.
Methods
The methods section is organized as follows. First, the web technologies used in Light-RCV are described. The second to fifth subsections describe Light-RCV's four distinctive designs in comparison with existing offline NGS viewers.
Web technology
The web technologies used in Light-RCV can be divided into backend and frontend. The backend handles data access and storage; it was developed with PHP and runs on an Apache web server. The frontend handles data visualization and user input; it was developed with HTML5, CSS3 and JavaScript and runs in browsers. Because NGS data are valuable to individual researchers, Light-RCV was developed as an offline tool that can run locally on a personal computer without a network connection. Light-RCV is compatible with portable web servers such as USB-WebServer (http://www.usbwebserver.net/en/) and XAMPP (https://www.apachefriends.org/index.html). Users are given guidelines for quickly setting up a local web environment and do not have to upload their NGS data to a remote server.
Read coverage construction algorithm
The input of NGS viewers is a huge set of read alignments, usually stored in SAM or BAM files. Such files are not optimized for NGS viewers, so an NGS viewer must convert the raw format into an internal format before read coverage visualization. Light-RCV has four featured designs on the path from read alignments to the visualized read coverage. The first is the read coverage construction algorithm. The first step in converting read alignments (i.e., the start and end positions of reads on the genome) into the read count of each position is to allocate a big array of the genome size. To process a read at position i of length l, a for-loop is then used to increase elements i, i+1, ..., i+l-1 of the big array by one. Light-RCV reduces the time complexity of processing a read alignment from O(l) to O(1) (Figure 3): only element i is increased by one and element i+l-1 is decreased by one when processing a read at position i of length l. In other words, before line 6 of Figure 3a, the big array of the proposed read coverage construction algorithm stores the changes in the base-level read coverage; lines 6 and 7 then accumulate these changes into the base-level read coverage. For processing r reads on a genome of g base pairs, the time complexities of the original for-loop and of Light-RCV's method are O(r×l) and O(r+g), respectively. The additional O(g) of Light-RCV comes from the accumulation step. Generally, r×l is much larger than g in experiments with appreciable coverage; therefore, Light-RCV is generally much faster than the for-loop approach.
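A minimal sketch of this difference-array idea in Python (not the authors' implementation; note that the textbook formulation of the trick decrements the position one past the read's last covered base):

```python
# Sketch of O(1)-per-read coverage construction via a difference array.
# Processing a read at start position i with length l increments
# diff[i] and decrements diff[i + l] (one past the last covered base);
# a single accumulation pass then recovers the base-level coverage.
import numpy as np

def build_coverage(reads, genome_size):
    """reads: iterable of (start, length) pairs with 0-based starts."""
    diff = np.zeros(genome_size + 1, dtype=np.int64)
    for start, length in reads:           # O(1) work per read
        diff[start] += 1
        diff[start + length] -= 1
    return np.cumsum(diff[:genome_size])  # O(g) accumulation pass

coverage = build_coverage([(0, 5), (2, 5), (4, 3)], genome_size=10)
print(coverage.tolist())  # [1, 1, 2, 2, 3, 2, 2, 0, 0, 0]
```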
During the read coverage construction, the mismatch information is also extracted. In a SAM file, such information looks like "59A15", which means that the 60th position of a 75 bp read is a mismatch; the "A" gives the nucleotide type at that position on the reference genome. Figure 1(l) indicates a mismatch.
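The string quoted above has the shape of the SAM MD tag; a simplified parser for this format (a sketch that ignores the "^"-prefixed deletion syntax of full MD tags) might look as follows:

```python
# Sketch of parsing a mismatch string such as "59A15": runs of
# matching bases alternate with the reference nucleotide at each
# mismatch. Simplified; deletions ("^..." in full MD tags) are ignored.
import re

def mismatch_positions(md):
    """Return (0-based read offset, reference base) for each mismatch."""
    pos, out = 0, []
    for run, ref_base in re.findall(r"(\d+)([ACGTN])", md):
        pos += int(run)              # skip the run of matching bases
        out.append((pos, ref_base))  # mismatch at this offset
        pos += 1                     # step over the mismatched base
    return out

print(mismatch_positions("59A15"))  # [(59, 'A')]: the 60th base of 75 bp
```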
Multi-resolution profiles
Since a chromosome can be up to 100 million base pairs long, visualizing a whole chromosome is slow and may crash an NGS viewer if the memory arrangement is not carefully designed. To solve this problem, Light-RCV generates multiple profiles (i.e., read coverage curves) at different scales for each chromosome. The first profile is at the base level, in which one data point represents one bp of the genome. This profile is used when the user selects a viewing range smaller than 20000 bps. The second profile is 1/20 the size of the first: a data point of the second profile represents 20 bps of the genome, and its values (such as the read count of the positive strand) are the maxima over the corresponding 20 bps. This profile is used when the user selects a viewing range of 20001~400000 bps. The third profile is 1/20 of the second, and so forth. As a result, the total number of profiles is determined dynamically by the chromosome size. This design ensures that Light-RCV returns at most 20000 data points at a time, which is feasible for most screens, while minimizing the required resources and processing time. Moreover, the storage requirement does not increase greatly: it is only approximately 1.05 (= 1 + 1/20 + 1/400 + ...) times the original required storage.
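A sketch of how such profiles could be generated, assuming (as the text implies) max-aggregation over blocks of 20 points per level:

```python
# Sketch of multi-resolution profile generation: each coarser level
# stores, per point, the maximum over 20 points of the previous level,
# so zoomed-out views never hide a coverage peak.
import numpy as np

def build_profiles(base_coverage, factor=20, max_points=20000):
    profiles = [np.asarray(base_coverage)]
    while len(profiles[-1]) > max_points:
        prev = profiles[-1]
        pad = (-len(prev)) % factor   # pad so the length divides evenly
        padded = np.pad(prev, (0, pad), constant_values=0)
        profiles.append(padded.reshape(-1, factor).max(axis=1))
    return profiles                   # level count depends on chromosome size

levels = build_profiles(np.random.poisson(5, size=1_000_000))
print([len(p) for p in levels])       # [1000000, 50000, 2500]
```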
Two-stage architecture
In Light-RCV, most computations in the flow from read alignments to a read coverage are moved to a separate stage that is less critical to UX. Namely, the process is split into two stages (Figure 4). Existing NGS viewers do not explicitly separate these stages; both are carried out after users specify a genomic range. This causes considerable waste, because read coverage construction is required only once per NGS experiment while users usually specify a genomic range many times. The first stage, denoted the "compilation stage" in Light-RCV, prepares the base-level read coverage of the entire genome in an internal format. The second stage, denoted the "visualization stage", retrieves and visualizes the desired part when users specify a genomic region. In Light-RCV, the internal format is stored in files with the efficient layout described in the next subsection. All subsequent visualization operations start from these internal files, even after the computer reboots. The two stages in Figure 4 do not correspond to the backend and frontend described above: both stages depend on the backend to access the data (mainly writing in the compilation stage and reading in the visualization stage) and on the frontend to interact with users.
Storage format
To optimize the response time, most computations should be moved to the compilation stage. This computation arrangement is determined by the design of the internal format in Figure 4. The internal file of Light-RCV was designed as the exact in-memory layout of the base-level read coverage, a so-called "memory dump." This design has two important properties. First, the data at a specific genomic position can be retrieved without sequentially loading the data before that position. Second, a contiguous range of data can be retrieved in a single operation, regardless of the range size.
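A minimal sketch of this memory-dump idea; the file name, record type and helper functions are illustrative assumptions rather than Light-RCV's actual internal format:

```python
# Sketch of a fixed-width "memory dump" store: because every position
# occupies the same number of bytes, a seek jumps straight to any
# genomic position, and any contiguous range is read in one operation.
import numpy as np

RECORD = np.dtype(np.int32)  # assumed: one fixed-width count per position

def write_profile(path, coverage):
    np.asarray(coverage, dtype=RECORD).tofile(path)

def read_range(path, start, end):
    """Read coverage for [start, end) without scanning preceding data."""
    with open(path, "rb") as f:
        f.seek(start * RECORD.itemsize)  # jump directly to `start`
        return np.fromfile(f, dtype=RECORD, count=end - start)

write_profile("chr1.cov", range(100))
print(read_range("chr1.cov", 40, 45))  # [40 41 42 43 44]
```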
"Biology",
"Computer Science"
] |
Real Exchange Rate Misalignment and Economic Growth in Belt & Road Countries: The Role of Financial Integration
A competitive exchange rate is an important tool for economies to boost exports and build foreign exchange reserves, and the strategy is especially handy when countries are economically open to the world. Belt & Road Initiative (BRI) countries are financially integrated with one another through foreign capital. This study investigates the role of financial integration in the relationship between real exchange rate misalignment and economic growth in Belt & Road countries during 2001-2016 and 2013-2016 by applying the generalized method of moments (GMM). Using grouped and ungrouped samples, the results reveal that the real exchange rate plays a significant and positive role in economic growth. Financial integration also plays a significant and positive role in economic growth. The interaction terms of the real exchange rate and financial integration play a significant and negative role in economic growth. Moreover, several robustness checks, such as two-stage least squares and fixed and random effects models, confirm the results of the GMM approach. Finally, a policy recommendation can be drawn from this study: a capital shortage can be offset by applying a competitive exchange rate policy.
reserves may be utilized for the import of raw materials. Similarly, devaluation enhanced the scale of production through an increased volume of sales in foreign markets. In East Asian countries, inward-oriented policies associated with overvalued currencies obstructed economic growth, while outward-oriented policies (i.e., policies expressed in terms of a devalued currency) encouraged economic growth and trade in these regions (Cottani, Cavallo, & Khan, 2005; Dollar, 2005).
In short, the countries that suffer from balance of payments constraints and foreign capital shortages are those that are less financially integrated. Undervaluation could offset the foreign capital shortage by improving the trade balance, relieve the balance of payments constraint, and ultimately boost investment and economic growth. By contrast, in a country with a high degree of financial integration, foreign capital inflows could eliminate the balance of payments constraint, and hence the investment-enhancing effect of devaluation could be insignificant.
Considering the conflicting results of previous studies, our study aims to investigate the relationship between undervaluation and growth on the premise that financial integration is an essential factor for the successful implementation of exchange rate policy (Dai et al., 2016). Based on the models of Dai, Delpachitra, & Cottrell (2016) and Porcile & Lima (2010), which explain the operation of the capital accumulation channel, this study proposes that the effect of a competitive RER on economic growth is expected to be strongest in BRI countries with low levels of financial integration. Unlike previous studies that focus on an aggregate sample of countries, this study employs grouped and ungrouped data on BRI countries. We also try to overcome the shortcomings of previous studies by analyzing the data over various time spans. Moreover, to counter the potential endogeneity issue, this study employs the generalized method of moments (GMM) technique. For robustness, we use the two-stage least squares (2SLS) estimator. Similarly, a time dummy is incorporated to show the impact of RER misalignment on growth in fixed and random effects models.
This paper is organized as follows. Section 2 describes the empirical models used to test the underlying hypothesis. Section 3 presents an estimation strategy, regression results, and implications. Section 4 provides the conclusion.
2-Model specification and data
2.1. RER misalignment estimation
There are three approaches through which one can estimate the RER: the general equilibrium approach, the partial equilibrium approach, and the reduced equation approach. All three have advantages and disadvantages. The general equilibrium approach has a strong theoretical foundation, but since the required data are not readily available for most developing countries, it suffers from measurement error in countries at lower levels of economic development and is therefore not a good choice here. The partial equilibrium approach produces unreliable estimates and depends on many assumptions. The reduced equation approach is the appropriate one for estimating real exchange rate (RER) misalignment; it is also simple, although the strong homogeneity assumption in cross-country analysis gives it some limitations of its own.
In this paper, we apply the reduced equation method to estimate the RER misalignment index (Edwards, 1988; Elbadawi, Kaltani, & Schmidt-Hebbel, 2008; Razin & Collins, 2010). There are three steps in constructing the index. In the first step, we estimate the RER by adjusting the nominal exchange rate (NER) with a price index (equation (1)), where i and t represent country and year, respectively, and * represents the value in the base year; we use the GDP deflator and the consumer price index as price indices. The second step involves the econometric model

RER_it = β0 + β1·GDPR_it + β2·TOT_it + β3·OPEN_it + β4·FDI_it + ε_it (2)

where GDPR is a country's per capita GDP relative to US per capita GDP, TOT represents the terms of trade, OPEN is the value of exports plus imports divided by GDP, and FDI represents the ratio of foreign direct investment to GDP. The fitted values of the RER represent the equilibrium RER (ERER). In the third step, we compute the RER misalignment index (RERMIS), i.e., the ratio of the RER to the equilibrium RER:

RERMIS_it = RER_it / ERER_it (3)

If the value of RERMIS is greater than 1, the exchange rate is undervalued; if it is less than 1, the exchange rate is overvalued.
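A minimal sketch of this three-step construction, assuming a panel with the named columns (the column names and helper function are illustrative, not the authors' code):

```python
# Sketch of the RERMIS construction: fit equation (2) by OLS, take the
# fitted values as the equilibrium RER, and form the ratio (3).
import pandas as pd
import statsmodels.api as sm

def rer_misalignment(df: pd.DataFrame) -> pd.DataFrame:
    """df is assumed to contain columns RER, GDPR, TOT, OPEN, FDI."""
    X = sm.add_constant(df[["GDPR", "TOT", "OPEN", "FDI"]])
    fit = sm.OLS(df["RER"], X).fit()          # equation (2)
    out = df.assign(ERER=fit.fittedvalues)    # equilibrium RER
    out["RERMIS"] = out["RER"] / out["ERER"]  # equation (3)
    return out                                # RERMIS > 1: undervalued
```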
2.2. Econometric specification of economic growth
Through the channel of financial integration, we estimate the relationship between RER misalignment and economic growth. We compute various specifications of the model by introducing interaction terms into the baseline growth equation, which takes the form GROWTH_it = α0 + α1·RERMIS_it + α2·FI_it + α3·(RERMIS_it × FI_it) + δ·CONTROL_it + μ_it, where FI denotes a measure of financial integration. FOPEN is the financial openness indicator of Chinn & Ito (2008), and RERMIS*FOPEN represents the interaction between RERMIS and FOPEN. Similarly, CONTROL indicates the vector of control variables used in this study, consisting of the ratio of gross fixed capital formation to GDP, inflation, and the share of government spending in GDP.
We incorporate various measures of financial integration. First, we employ FDI as a percentage of GDP; second, we use the Chinn & Ito (2008) financial openness indicator. Table 1 shows the data and their sources. Most of the data are taken from the World Bank; the financial openness indicator, however, is taken from Chinn & Ito (2008). Rather than merely measuring the intensity of capital controls, the Chinn & Ito (2008) index captures a country's de jure degree of capital account openness.
2.3. Estimation methodology
New growth theories suggest that per capita GDP, exchange rate, and FDI are likely to be endogenous variables.
If this is the case, straightforward panel estimation will yield biased results. Therefore, a test for endogeneity should be applied. If the null hypothesis of exogeneity is rejected, per capita growth, RERMIS, and FDI should be treated as endogenous variables and an instrumental variable method, the generalized method of moments (GMM), will be employed (Baltagi, 2005; MW & Enders, 2006). The tricky issue in the GMM methodology is selecting valid instruments/moments. No rule of thumb exists for instrument selection; various approaches have been discussed by Murray (2007). The advantage of GMM over other instrumental variables (IV) methods is that a GMM estimator is more efficient than an IV estimator when heteroscedasticity is present and no worse asymptotically when it is not. In this study, the lagged values of the independent variables are used as instruments. As a robustness check, we apply the 2SLS method. These estimation techniques are applied to aggregated and disaggregated data (decomposing the global sample into groups such as Asia, Europe, and Africa) for multiple periods (2001-2016, 2013-2016, and 2009-2012). The purpose of this disaggregation is to observe the effect of financial integration on BRI countries closely. Similarly, the purpose of splitting the period is to see the impact of the BRI initiative on the relationship.
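A hedged sketch of this estimation strategy using the linearmodels package; the column names, lag choice, and specification below are illustrative assumptions, not the authors' exact setup:

```python
# Sketch of IV estimation with lagged regressors as instruments:
# GMM for the baseline estimates, 2SLS as the robustness check.
import pandas as pd
from linearmodels.iv import IVGMM, IV2SLS

def growth_iv(df: pd.DataFrame):
    """df: country-year panel with growth, RERMIS, FDI and controls."""
    df = df.sort_values(["country", "year"]).copy()
    for col in ("RERMIS", "FDI"):                 # endogenous regressors
        df[f"L_{col}"] = df.groupby("country")[col].shift(1)
    df = df.dropna()
    df["const"] = 1.0
    exog = df[["const", "GFCF", "INFL", "GOV"]]   # control variables
    endog = df[["RERMIS", "FDI"]]
    instruments = df[["L_RERMIS", "L_FDI"]]       # lagged instruments
    gmm = IVGMM(df["growth"], exog, endog, instruments).fit()
    tsls = IV2SLS(df["growth"], exog, endog, instruments).fit()
    return gmm, tsls
```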
For robustness, we use fixed and random effects models including a time dummy on the aggregate data (the global sample). For this purpose, we divide the data into two parts, 2009-2012 and 2013-2016, assigning 0 to the former and 1 to the latter. Compared with the random and fixed effects models, which are unrestricted, the pooled model is restricted and assumes that countries are homogeneous. The fixed effects model is desirable when it is necessary to control for omitted variables that are constant over time but differ between countries. Since the fixed effects model accounts for heterogeneity and individual country effects, it gives better estimates than the pooled model.
By contrast, no individual country effects are assumed in the random effects model. To test this assumption and to compare the fixed and random effects estimates, the Hausman (2006) test is employed. The Hausman test indicates whether the explanatory variables are correlated with the specific effects, ensuring selection of the model with consistent estimates. The central assumption of random effects estimation is that the random effects are uncorrelated with the explanatory variables. The fixed effects model is appropriate if the p-value is significant, i.e., below 5%; if it is greater than 5%, the most appropriate model is the random effects model.
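A minimal sketch of the fixed-versus-random effects comparison with a manually computed Hausman statistic (panel estimators from linearmodels; variable names are illustrative assumptions):

```python
# Sketch: estimate fixed and random effects and compute the Hausman
# statistic on the common slope coefficients.
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

def hausman_test(df: pd.DataFrame):
    d = df.set_index(["country", "year"])   # entity-time MultiIndex
    y = d["growth"]
    X = d[["RERMIS", "FOPEN", "GFCF", "INFL", "GOV"]]
    fe = PanelOLS(y, X, entity_effects=True).fit()
    re = RandomEffects(y, X).fit()
    b = fe.params
    B = re.params[b.index]
    v = fe.cov - re.cov.loc[b.index, b.index]
    stat = float((b - B) @ np.linalg.inv(v) @ (b - B))
    p = stats.chi2.sf(stat, df=len(b))
    return stat, p  # p < 0.05 favors fixed effects; otherwise random effects
```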
3-Results
The baseline estimation methodology is the system generalized method of moments (GMM). Tables 2 to 5 report the results of the growth equation for different groups from 2001 through 2016. Each table displays the results of eight regressions (GMM and 2SLS); 2SLS is employed to check the robustness of the results. Each set of regressions includes the baseline model and the models with interaction terms and control variables. The column headings 1, 2, 3, and 4 represent the various specifications. In this analysis, we exclude some countries because of non-availability of data (the country list is available in the Appendix, Table A1). Similarly, we exclude the countries of Oceania and South America from the disaggregated analysis but include them in the aggregated (global) sample. Table 6 shows the robustness results, whereby we replace FDI inflow with FDI stock.
Similarly, Tables 7 to 10 report the results of the growth equation from 2013 through 2016. This period is selected on the grounds that the BRI was in its initial phase in 2013. Table 11 introduces FDI stock instead of FDI inflow to check the robustness of the results. Table 12 shows the results of the fixed and random effects models.
3.1. Estimation results for different groups of countries during 2001-2016
Table 2 shows the results for the global sample during 2001-2016. RERMIS has a positive and significant impact on growth in all specifications. The positive coefficients demonstrate that a more competitive RERMIS could produce a higher rate of economic growth. Moreover, since the model captures a linear relationship between RER misalignment and growth, the result also means that a higher degree of overvaluation reduces economic growth. These results are consistent with previous studies (Béreau et al., 2012; Bleaney & Greenaway, 2001; Dai et al., 2016; Gala, 2008; Hausmann et al., 2005; Razin & Collins, 2010). The inflow of FDI proxies for the degree of financial integration: an economy is considered highly financially integrated and open if its ratio of FDI inflow to GDP is high, a choice supported by numerous studies (Abbas & Christensen, 2010; Mankiw, Romer, & Weil, 1992; Pattillo, Poirson, & Ricci, 2002). Consistent with expectations, openness is growth enhancing (Lucas, 1988; Pattillo et al., 2002), as greater openness of an economy to the outside world reflects improved competitiveness and productivity, which lead to better economic performance. FDI is positively significant in all specifications except the last one.
The coefficients of the interaction terms between RER misalignment and FDI were significantly negative at the 1% level in regression 2, which implies that the growth-enhancing effect of a competitive RER is stronger in less financially integrated countries, as expected. Given a rise in the level of undervaluation or a decrease in the level of overvaluation, the less financially integrated a country is, the more its growth rate increases. Chinn & Ito (2008)'s capital openness index replaces FDI in regressions 3 and 4 as the proxy for financial integration. The coefficient of the interaction term is negative and significant at the 1% level. By the construction of the capital openness index, a higher value indicates a higher degree of financial integration; the result therefore again demonstrates a stronger positive impact of a competitive exchange rate in less financially integrated countries. Tables 3, 4, and 5 show the results for Asia, Europe, and Africa within the BRI during 2001-2016. RERMIS has a positive and significant impact on growth in the Asian group in all specifications except specification 4; the results are not significant for Europe, although the coefficient of RERMIS is positive; and the results for the African group are significant in specifications 1 and 3 but insignificant in 2 and 4. Similarly, the coefficients of the interaction terms are insignificant for Asia and Africa but significant for Europe. The signs of the control variables accord with economic theory. As a robustness check, we replace FDI inflow with FDI stock; the results are given in Table 6. The inclusion of FDI stock confirms that RERMIS positively and significantly influences growth and that the interaction terms negatively and significantly influence growth.

3.2. Estimation results for different groups of countries during 2013-2016

Table 7 shows the results for the global sample during 2013-2016. RERMIS has a positive and significant impact on growth in all specifications. The coefficient of the interaction term between RER misalignment and FDI is significantly negative, and the interaction term between RER misalignment and the capital openness indicator is likewise negative and significant at the 1% level. Tables 8, 9, and 10 show the results for Asia, Europe, and Africa within the BRI during 2013-2016; the period is selected because the Belt & Road Initiative was in its initial phase in 2013. RERMIS has a positive and significant impact on growth in the Asian group in specifications 1 and 3 but is insignificant in specifications 2 and 4; the results are not significant for Europe, although the coefficient of RERMIS is positive; and the results for the African group are significant in all specifications except specification 4. Similarly, the coefficients of the interaction terms are significant for Asia and Africa but not for Europe. The signs of the control variables accord with economic theory. As a robustness check, we replace FDI inflow with FDI stock; the results are given in Table 11. The inclusion of FDI stock confirms that RERMIS positively and significantly influences growth and that the interaction terms negatively and significantly influence growth.
Table 12 shows the results of the fixed and random effects models for the global sample. For all regressions, both fixed effects and random effects models are estimated; the Hausman test suggests that random effects estimation is the proper strategy. In line with economic theory, all the variables are significant with the correct signs. We use a robust method, White's heteroscedasticity-corrected covariance matrix estimator, which improves the standard errors without altering the estimates of the slope coefficients. The results show that competitive RER misalignment is positively significant in regressions 1 and 4. The results for FDI and its interaction term are insignificant. The capital openness indicator is significant in specifications 3 and 4. Similarly, the interaction term between capital openness and RER misalignment is significantly negative in specifications 3 and 4. The time dummy is significantly positive in specifications 1 and 4.
Overall, the results show that RER misalignment has a significant positive impact on per capita growth in Belt & Road countries. Similarly, financial integration also plays a significant and positive role in economic growth. [Table notes: *, **, and *** denote significance at the 1%, 5%, and 10% levels, respectively; the coefficient of the constant is omitted; robust standard errors and Hansen J-stat p-values are in parentheses.]
4-Conclusion
To achieve a certain level of economic growth, developing countries manipulate their currencies by applying devaluation policies, yet the findings of the empirical literature on the relationship between devaluation and economic growth are not consistent. The financial systems in these countries are diverse. This study is an attempt to reconcile the conflicting literature by investigating the effect of undervaluation on economic growth in Belt & Road countries in the presence of financial integration during 2001-2016 and 2013-2016. Since most BRI countries are developing economies with low levels of financial integration, we hypothesize that real exchange rate misalignment plays an important role in their economic growth.
Considering the issue of endogeneity, the study relies on the GMM approach. The results are in line with expectations: real exchange rate misalignment is significant and positive. Financial integration also plays an essential and positive role in economic growth. The interaction terms of the real exchange rate and financial integration play a significant and negative role in economic growth. Moreover, several robustness checks, such as two-stage least squares and fixed and random effects models, confirm the results of the GMM approach.
The present study investigates the role of financial openness in the exchange-rate-growth relationship in a linear framework. Keeping in view the volatility of exchange rates, the study can be extended to investigate the nonlinearities associated with the exchange-rate-growth relationship.
| 4,148.2 | 2019-10-01T00:00:00.000 | [ "Economics" ] |
Phase-sensitive-measurement determination of odd-parity, spin-triplet superconductivity in Sr2RuO4
In this paper, I present a brief summary of the physical properties of Sr2RuO4 and also review our work on the Josephson effect and phase-sensitive measurements of Sr2RuO4. Our results provide strong support to the prediction that this material is an odd-parity, spin-triplet superconductor. I also discuss the eutectic phase of Ru–Sr2RuO4 and comment on several unresolved issues regarding Sr2RuO4.
Introduction
Superconductivity occurs because of the formation of Cooper pairs. The Fermi statistics of electrons demands that the wave function of a Cooper pair be antisymmetric with respect to interchanging the coordinates (r 1 and r 2 ) and spins (s 1 and s 2 ) of the two electrons in the pair. For a superconductor with translational invariance, the wave function of a Cooper pair can be written as a function of the relative coordinate, r (= r 1 − r 2 ), or the corresponding wave vector, k, and the two spins. Interchanging the two electrons then amounts to inverting r or k and interchanging the spins. Therefore, under the inversion transformation, the Cooper pair wave function has to be either an even function (even-parity), with total spin S = 0 (spin-singlet), or an odd function (odd-parity), with total spin S = 1 (spin-triplet), to ensure that the total wave function is antisymmetric under particle interchange. As a result, superconductors can be divided into two categories: even-parity, spin-singlet superconductors and odd-parity, spin-triplet superconductors [1]. Superconductors in these categories can be classified further if additional symmetries exist. For example, in the presence of rotational symmetry, the angular momentum quantum number, l, is a good quantum number, and the spatial part of the wave function can be expressed by a spherical harmonic. For even-parity, spin-singlet superconductors, l can be zero or an even number, leading to the familiar s-wave (l = 0) or d-wave (l = 2) superconductors. For odd-parity, spin-triplet superconductors, l can be 1 (p-wave), 3 (f-wave), and so on. For a crystalline superconductor, the pairing symmetry is classified according to the point group, because continuous rotational symmetry does not exist [1]. The superconducting order parameter is a scalar for spin-singlet and a vector for spin-triplet superconductors. A convenient form for the latter is the so-called d-vector, used in describing superfluid 3 He [2]. The magnitude of the d-vector represents the superconducting energy gap, while its direction is perpendicular to the plane in which the spins of the Cooper pairs lie.
Except for a few unusual classes of superconducting materials, most superconductors discovered to date, including all elemental superconductors, are s-wave superconductors. The pairing symmetry of the high-T c superconductors is known to be predominantly d-wave. Superfluid 3 He is the first experimentally established p-wave (charge-neutral) superconductor [1]- [3]; the occurrence of superfluidity in 3 He is driven by the attractive interaction in the p-wave channel through spin fluctuations and their feedback effects [2]. Heavy fermion superconductors were the first serious candidates for electronic spin-triplet superconductivity [1,4,5]. The strong Coulomb repulsion in these heavy-fermion materials appears to exclude a significant attractive interaction in the s-wave channel and therefore s-wave pairing. However, even though it is widely accepted that non-s-wave pairings prevail in heavy fermion superconductors, consensus on the exact pairing symmetry of any heavy fermion superconductor has proven difficult to establish [6]. After the discovery of superconductivity in Sr 2 RuO 4 [7], which has an intrinsic superconducting transition temperature (T c ) of 1.5 K [8], and the subsequent prediction [9,10] that the superconducting pairing symmetry in Sr 2 RuO 4 is p-wave, this material quickly became a leading candidate for establishing the long-sought spin-triplet superconductivity.
The p-wave pairing state in Sr 2 RuO 4 was predicted based on the observation of some key properties of this material. Rice and Sigrist [9] pointed to the apparent S = 1 correlation in Sr 2 Ir 1−x Ru x O 4 , the ferromagnetic (FM) ordering in SrRuO 3 (a material closely related to Sr 2 RuO 4 ) and, most importantly, similarities between the normal-state characteristics of Sr 2 RuO 4 and those of liquid 3 He.

Many experiments have been carried out on Sr 2 RuO 4 to address its pairing symmetry. The Josephson effect and phase-sensitive measurements provided particularly strong support to the picture that Sr 2 RuO 4 is a chiral p-wave superconductor. In this paper, I will present a review of the Josephson effect and phase-sensitive measurements carried out so far, focusing on work carried out primarily at Penn State. I will also summarize the physical properties of Sr 2 RuO 4 and discuss briefly the eutectic phase of Ru-Sr 2 RuO 4 and several unresolved issues regarding Sr 2 RuO 4 .
Physical properties of Sr 2 RuO 4
Originally synthesized in 1959 [11], Sr 2 RuO 4 was rediscovered as a substrate material for the growth of single crystalline films of high-T c superconductors [12] and as a possible 4d transition metal oxide counterpart of the 3d high-T c cuprates in the search for novel superconductors [13]. Superconductivity in Sr 2 RuO 4 was discovered in 1994, 35 years after the initial synthesis. This discovery generated intense interest in the superconducting materials community because Sr 2 RuO 4 is isostructural with the first high-T c cuprate, (La, Ba) 2 CuO 4 , and the only transition metal oxide with a layered perovskite crystal structure that becomes superconducting without the presence of Cu. (So far, Sr 2 RuO 4 is the only known superconducting ruthenium oxide.) Therefore, it was hoped that the study of Sr 2 RuO 4 could provide fresh insight into the mechanism of superconductivity in high-T c cuprates.
It soon became clear that Sr 2 RuO 4 is very different from the high-T c cuprates. In the normal state, Sr 2 RuO 4 is a paramagnetic metal showing the familiar Fermi liquid behavior rather than the exotic non-Fermi liquid behavior well known in high-T c cuprates. Structurally, Sr 2 RuO 4 is a quasi-2D material featuring a periodic stacking of perovskite RuO 2 layers separated by two rock-salt SrO layers. Electronically, Sr 2 RuO 4 is one of the most anisotropic metals known, with a ratio of out-of-plane to in-plane resistivities >200 at room temperature and >800 just above T c . The Fermi surface of Sr 2 RuO 4 consists of three nearly cylindrical sheets [14,15]: the γ band originating from the d x y orbital, and the α and β bands from the d x z and d yz orbitals. The α band is hole-like, while the β and γ bands are electron-like. Nuclear magnetic resonance (NMR) measurements suggest that magnetic fluctuation in Sr 2 RuO 4 is orbital dependent [16], which is also apparent from magneto-thermoelectrical measurements [17]. Strongly orbital-dependent normal-state properties should lead to orbital-dependent superconductivity, as suggested theoretically [18,19].
Sr 2 RuO 4 is the n = 1 member of the Sr n+1 Ru n O 3n+1 Ruddlesden-Popper (R-P) homologous series (figure 1). The n = ∞ member of the series, SrRuO 3 , is a 3D ferromagnet with T c FM = 160 K [20]. The n = 5, 4 and 3 members, Sr 6 Ru 5 O 16 , Sr 5 Ru 4 O 13 and Sr 4 Ru 3 O 10 , are all layered ferromagnets [21]. Sr 4 Ru 3 O 10 , the most two-dimensional ferromagnet (T c FM ≈ 100 K) in this R-P series, was found to exhibit some unusual magnetoelastic [22] and magnetothermoelectric [23] properties. The n = 2 member, Sr 3 Ru 2 O 7 , is a paramagnetic metal showing a low-temperature metamagnetic transition at a field between 5 and 8 T, depending on the field orientation [24]. Bulk measurements showed that both FM and antiferromagnetic (AFM) fluctuations are present in Sr 3 Ru 2 O 7 [25,26], which may be a reflection of incommensurate magnetic fluctuation (IMF) peaked around (±1/2, 0)(π/a) and (0, ±1/2)(π/a), as revealed in inelastic neutron scattering (INS) measurements [27]. At the n = 1 end of the R-P series, Sr 2 RuO 4 features broadly enhanced magnetic fluctuations. INS measurements revealed the presence of IMF with peaks in the susceptibility around (±2/3, ±2/3)(π/a) [28]. The IMF appears to originate from the 1D d x z and d yz bands, based on local density approximation calculations [28]. The evolution of the magnetic properties within this R-P series appears to suggest that the tendency towards FM ordering has to be fully suppressed in order for p-wave superconductivity to emerge, which differs from the commonly held belief that FM fluctuation would help spin-triplet pairing.
Experimental evidence for unconventional, non-s-wave superconductivity in Sr 2 RuO 4 is abundant. Early NMR and nuclear quadrupole resonance (NQR) 1/T 1 studies of Sr 2 RuO 4 yielded no Hebel-Slichter coherence peak [29], offering evidence for non-s-wave superconductivity in this material. Measurements on Pb-Sr 2 RuO 4 -Pb junctions showed an unexpected drop in the temperature dependence of the critical current, I c (T ) [30], suggesting that Sr 2 RuO 4 is a different type of superconductor from Pb. The occurrence of superconductivity in Sr 2 RuO 4 was found to be extremely sensitive to the presence of impurities [31], again suggesting that this material cannot be an s-wave superconductor. (It is well known that only s-wave superconductivity can survive a substantial amount of disorder. For non-s-wave superconducting pairing, the elastic mean free path has to be larger than the zero-temperature superconducting coherence length, which is typically possible only when T c is high, as in the case of the high-T c superconductors.) Evidence for unconventional, non-s-wave superconductivity was also found in an elastic neutron scattering study that revealed a square rather than a triangular vortex lattice [32], and in tunneling measurements showing the existence of Andreev surface bound states [33,34].
The presence of nodes in the superconducting order parameter is another hallmark of unconventional superconductivity. Measurements of the thermodynamic, magnetic and transport properties of clean, single-crystalline Sr 2 RuO 4 at temperatures much lower than its T c showed power-law behavior [35]- [39], suggesting the presence of a large residual density of states (DOS) in the zero-temperature limit. These results would have been a firm indication of nodes in the superconducting order parameter if Sr 2 RuO 4 were a single-band superconductor; however, the presence of multiple bands across the Fermi surface makes it possible that a band-dependent gap is responsible for the large DOS found well below T c . On the other hand, specific heat measurements with varied orientation and magnitude of the magnetic field were used [40] to evaluate the node structure of the superconducting order parameter of Sr 2 RuO 4 , leading to the suggestion that the order parameter in Sr 2 RuO 4 is band dependent with vertical line nodes [40].
Experiments also suggest that Sr 2 RuO 4 features a time-reversal-symmetry-breaking superconducting state, which can be either chiral p- or d-wave. The earliest experimental evidence for such a superconducting state in Sr 2 RuO 4 came from the observation of a spontaneous magnetic field in muon spin rotation measurements [41] (a result confirmed by other groups [42,43]), a large nonzero Kerr rotation below T c in high-resolution polar Kerr effect measurements [44] and a non-symmetric quantum interference pattern in in-plane Josephson junctions of Pb-Sr 2 RuO 4 [45]. Within the Rice-Sigrist scenario, the only pairing state with such a property is the Γ 5 − state shown in table 1. The spin configuration of the superconducting state in Sr 2 RuO 4 was first probed by NMR Knight shift measurements [46] with the magnetic field applied along an in-plane direction, showing that the spin susceptibility is constant across T c . Polarized-neutron scattering measurements [47] led to the same conclusion. NMR measurements on Sr 2 RuO 4 with the field aligned along the c-axis are difficult because its c-axis upper critical field is very small. However, NMR Knight shift measurements [48,49] were recently carried out on Sr 2 RuO 4 with a c-axis field as small as 200 G (far below the c-axis critical field). Interestingly, the measurements did not reveal the expected drop in the spin susceptibility below T c . The result was interpreted in a d-vector rotation scenario (the d-vector is along the c-axis in zero field but rotates to an in-plane direction in a field as small as 200 G) that preserves the spin-triplet pairing picture. This interpretation requires a small spin-orbital coupling. On the other hand, first-principles studies [50] appear to suggest a strong, rather than weak, spin-orbital coupling in Sr 2 RuO 4 . Therefore, the implication of the NMR Knight shift results for Sr 2 RuO 4 needs to be explored further.
Superconducting quantum interference device (SQUID)-based phase-sensitive measurements [51] probe the variation in the phase of the superconducting order parameter in real or reciprocal space. These measurements on Sr 2 RuO 4 showed that the phase of the superconducting order parameter changes by π under a 180° rotation, demonstrating explicitly p-wave pairing in this superconductor. Combined with the observation of a selection rule for Josephson coupling between Sr 2 RuO 4 and an s-wave superconductor [52], the pairing in Sr 2 RuO 4 must be that of the Γ 5 − state listed in table 1.
Josephson coupling between an s-and a p-wave superconductor
Josephson coupling between two superconductors through a tunnel barrier is linked directly to the overlap integral of the superconducting order parameters of the two superconductors. Therefore, Josephson coupling between an s- and a p-wave superconductor is possible only because of spin-orbital coupling [53]- [55]. In the absence of spin-orbital coupling, spin is a good quantum number; spin-singlet and spin-triplet wave functions are orthogonal to one another, with zero overlap between the wave functions, and the Josephson coupling between the s- and the p-wave superconductor would be strictly zero. In the case of a superconducting weak link, however, Josephson coupling between an s- and a p-wave superconductor is still possible, even without spin-orbital coupling [56]. The Josephson current density between an s- and a p-wave superconductor through a planar tunnel junction with translational invariance along the junction plane is predicted to be given by equation (1), where Δ s and d(k) are the order parameters of the s- and p-wave superconductors, respectively, k is the wave vector, n is the interface normal vector and ⟨...⟩ FS denotes an appropriate average over the Fermi surface. According to equation (1), the Josephson coupling between an s- and a p-wave superconductor through a planar tunnel junction is highly orientation dependent. In particular, if the tunnel junction plane is perpendicular to the direction of the d-vector, which makes n parallel to d(k), J s is strictly zero, even though the specific k-dependence of the d-vector is not known. This general conclusion provides a convenient way to check whether Sr 2 RuO 4 is indeed consistent with possessing p-wave pairing, as predicted by theory. As for s-wave superconductors, the strength of the Josephson coupling between an s- and a p-wave superconductor can be measured by the value of I c R N , where I c is the critical current and R N is the normal-state junction resistance. The Josephson coupling between two dissimilar s-wave superconductors at T = 0 is given by the Ambegaokar-Baratoff (A-B) formula [57],

I c R N = [2Δ 1 Δ 2 /e(Δ 1 + Δ 2 )] K(|Δ 1 − Δ 2 |/(Δ 1 + Δ 2 )), (2)

where Δ 1 and Δ 2 are the superconducting energy gaps and the function K is the elliptic integral of the first kind. This result suggests that the critical current of an s-wave Josephson junction is determined only by the junction resistance R N and the superconducting energy gaps of the two superconductors, independent of the details of the junction. In the case where the two superconductors have the same gap, Δ, we have

I c R N = πΔ/2e. (3)

To calculate the value of I c R N for a Josephson junction between an s- and a p-wave superconductor, one needs to know the precise functional forms of d(k) and the tunneling matrix entering equation (1). In the A-B calculation for s-wave Josephson junctions, the s-wave order parameter is assumed to be independent of k. (Even for s-wave superconductors, however, the order parameter can, in principle, be anisotropic in k-space with its sign unchanged on the Fermi surface.) The integration of the tunneling matrices then yields R N , making the A-B value of I c R N depend only on the energy gaps of the two superconductors involved. For a Josephson junction involving a non-s-wave superconductor, no similar convenience is available, making analytic results for J s difficult to obtain.
Numerical calculations [56,58,59] of J s between an s- and a p-wave superconductor have generally yielded values much lower than the corresponding A-B value of I c R N obtained by treating the p-wave superconductor as an s-wave one with an energy gap equal to the maximum gap of the p-wave superconductor.
Selection rule
Experimentally, the Josephson coupling between the s-wave superconductor In and Sr 2 RuO 4 was measured in c-axis and in-plane junctions prepared by pressing freshly cut pure In wire directly onto a cleaved ab or polished ac face of Sr 2 RuO 4 [52], as shown schematically in figure 2. The Josephson coupling for the in-plane In/Sr 2 RuO 4 junctions was found to be finite (figure 3(a)). The temperature dependence of the critical current in such a Josephson junction, an example of which is shown in figure 3(b), was found to vary from sample to sample, probably as a result of junction inhomogeneity. Other consequences of the junction inhomogeneity will be discussed in section 4.3.
None of the pressed In junctions prepared on the cleaved ab face showed a finite supercurrent. The absence of a finite supercurrent does not seem to be due to a suppressed I c R N , because the value of I c R N for the in-plane tunnel junctions was found to be large (see below). It is known that superconductivity is suppressed on the ab face of Sr 2 RuO 4 because of the rotation of the RuO 6 octahedra [59]. However, the number of RuO 2 layers subject to this suppression should not be more than a few unit cells, based on elastic energy considerations. The s-wave superconductor (S) In, the normal (N) region near the ab surface and the superconducting bulk Sr 2 RuO 4 single crystal (S′) should therefore form an SNS planar Josephson junction via the proximity effect. A finite Josephson coupling between the two superconductors is expected as long as the N-region is within a few times the clean-limit normal coherence length, ξ N . Taking the Fermi velocity along the c-axis, v F c = 1.4 × 10 4 m s −1 [15], ξ N ≈ 80 nm for Sr 2 RuO 4 at T = 0.3 K, the lowest temperature for this set of measurements. This is larger than the c-axis lattice constant, c = 1.28 nm, by almost two orders of magnitude. It is unlikely that the N-layer formed on a freshly cleaved Sr 2 RuO 4 single crystal is so thick that the supercurrent in c-axis In/Sr 2 RuO 4 junctions vanishes. On the other hand, the above selection rule for the Josephson coupling between In and Sr 2 RuO 4 is consistent with the d-vector of Sr 2 RuO 4 being aligned along the c-axis, which is the Γ 5 − state within the Rice-Sigrist scheme (table 1), according to equation (1).
Strength of the Josephson coupling
The strength of the Josephson coupling can be measured by the I c R N value, as pointed out above. However, even for two s-wave superconductors, the A-B limit given in equation (2) or (3) often represents only an upper bound on the Josephson coupling if the bulk gap values are used; the typical interpretation is that the superconducting energy gaps may be suppressed at the surface, causing I c to fall below the bulk value. For the in-plane In/Sr 2 RuO 4 junctions, no accepted value for the energy gap of Sr 2 RuO 4 is available. However, if one estimates the gap values from T c using the BCS result, Δ = 1.76k B T c , an A-B limit of 0.516 mV is obtained for an In/Sr 2 RuO 4 junction in the zero-temperature limit, assuming that Sr 2 RuO 4 is an s-wave superconductor. At T = 0.3 K, the value of I c R N was found to be 0.10 mV for the In/Sr 2 RuO 4 sample shown in figure 3(b), a substantial fraction of the A-B limit. Here, R N was taken as the junction resistance measured at the T c of In, because the normal-state junction resistance is slightly temperature dependent. Similar results were observed in other samples (table 2). Because the sign changes of d(k) tend to reduce J s when the Fermi-surface average ⟨...⟩ FS in equation (1) is carried out, as pointed out above [56,58,59], such a substantial fraction of the A-B limit is not expected for Josephson coupling between an s- and a p-wave superconductor. This issue is yet to be resolved.
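As a numerical check of the quoted A-B limit, a short sketch (taking Δ = 1.76k B T c with T c = 3.4 K for In and 1.5 K for Sr 2 RuO 4 , the latter from the text and the former an assumed standard value, together with equation (2) as written above):

```python
# Sketch: Ambegaokar-Baratoff limit for a junction of two dissimilar
# s-wave superconductors, applied to In/Sr2RuO4 with BCS gap estimates.
from scipy.special import ellipk  # complete elliptic integral K(m), m = k^2

kB = 8.617e-5                      # Boltzmann constant, eV/K
d_In = 1.76 * kB * 3.4             # In gap, eV (Tc ~ 3.4 K, assumed)
d_SRO = 1.76 * kB * 1.5            # Sr2RuO4 gap, eV (Tc = 1.5 K)
k = abs(d_In - d_SRO) / (d_In + d_SRO)
# Gaps in eV divided by the electron charge give IcRN directly in volts.
icrn = 2 * d_In * d_SRO / (d_In + d_SRO) * ellipk(k**2)
print(f"{icrn * 1e3:.3f} mV")      # ~0.516 mV, vs ~0.10 mV measured
```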
Magnetic field dependence
For a Josephson junction, I c oscillates as a function of the magnetic field applied along the junction plane with a period of H 0 = Φ 0 /A, where the junction area A = W(λ 1 + λ 2 ), W is the dimension of the junction, and λ 1 and λ 2 are the penetration depths of the two superconductors. A Fraunhofer diffraction pattern is also expected in I c (H ), with an amplitude that drops quickly after the first few periods. For an in-plane In-Sr 2 RuO 4 junction, λ 1 = 64 nm and λ 2 = λ ab = 180 nm. If the size of the In-Sr 2 RuO 4 junction is similar to that of the In wire, ∼1 mm, then H 0 will be a fraction of a gauss. In figure 4, the value of I c for an In/Sr 2 RuO 4 junction is plotted as a function of H . However, neither a Fraunhofer pattern nor a regular field modulation in I c (H ) was observed. This observation is consistent with the behavior of a non-uniform Josephson junction with a size of the order of microns, which is not surprising given that a mechanically polished ac face of Sr 2 RuO 4 possesses unavoidable mechanical damage and therefore disorder. Nevertheless, equation (1) remains valid [55] so long as translational invariance is maintained over the zero-temperature superconducting coherence lengths, ξ(0), which are 66 and 3.3 nm for Sr 2 RuO 4 (along the in- and out-of-plane directions, respectively) and 44 nm for bulk In. Therefore, the selection rule result discussed above appears to be unaffected. An interesting observation is that the Josephson current does not vanish until 400 G, larger than the minimal field required for the d-vector to rotate in Sr 2 RuO 4 , 200 G [48,49]. If the d-vector does rotate to the in-plane direction at a field as small as 200 G, as suggested by the NMR Knight shift measurements, one would expect the Josephson coupling to vanish or undergo a change at a characteristic magnetic field near or below 200 G. While the data shown in figure 4 appear to show a feature between 150 and 200 G, more data from a systematic study are needed to draw a conclusion.
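Plugging the quoted numbers into H 0 = Φ 0 /A gives the "fraction of a gauss" estimate directly; a one-line check (junction width W ∼ 1 mm assumed, per the text):

```python
# Sketch: expected flux-modulation period H0 = Phi0 / A for the
# in-plane In/Sr2RuO4 junction described in the text.
PHI0 = 2.067e-15                  # flux quantum, Wb
W = 1e-3                          # junction dimension ~ In wire size, m
lam_In, lam_SRO = 64e-9, 180e-9   # penetration depths, m
A = W * (lam_In + lam_SRO)        # effective junction area
H0 = PHI0 / A                     # period in tesla
print(f"{H0 * 1e4:.3f} G")        # ~0.085 G: a fraction of a gauss
```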
Phase-sensitive experiments on bulk Sr 2 RuO 4
In a phase-sensitive experiment, the phase rather than the amplitude of the superconducting order parameter is determined as a function of crystal orientation. Geshkenbein, Larkin and Barone (GLB) proposed the original phase-sensitive-measurement idea, including both the SQUID and the tricrystal configurations, in the context of detecting p-wave superconductivity in heavy fermion superconductors [60]. Leggett rediscovered the SQUID-based phase-sensitive measurements for d-wave superconductors in high-T c research [61]. For high-T c superconductors, the symmetry of the order parameter was determined unambiguously only after phase-sensitive experiments were carried out [62,63]. Similarly, phase-sensitive measurements of Sr 2 RuO 4 are needed to settle its pairing symmetry experimentally.
Our approach to phase-sensitive measurements of Sr 2 RuO 4 is to build a phase-sensitive toolkit, as illustrated in figure 5. According to equation (1), the Josephson currents flowing through two junctions prepared on opposite faces of a spin-triplet superconductor (the two junctions having normal vectors n and −n, respectively), as in the GLB SQUID (figure 5(c)), are out of phase with one another by 180°, the intrinsic phase difference of the superconducting order parameter under a rotation by π. Similarly, for a corner or a same-face SQUID, the intrinsic phase difference will be 90° or 0°, respectively. Experimentally, however, a single junction on a side or a corner of the crystal serves the same purpose. Single Josephson junctions have a smaller effective area for modulating magnetic flux than a SQUID, making the device less susceptible to flux trapping (see below). The expected experimental signatures of the three possible pairing symmetries in the quantum interference pattern are shown in table 3. It should be emphasized that the flux threading the SQUID loop (or the Josephson junction) is the total flux, which in general differs from the applied flux (see below).
To prepare the phase-sensitive experiment toolkit for Sr 2 RuO 4 , we used single-crystal-based structures (figure 6), since superconducting epitaxial films of Sr 2 RuO 4 were not (and are still not) available [64]. The s-wave superconductor Au 0.5 In 0.5 , with a T c of ∼0.5 K, was used because it wets the Sr 2 RuO 4 crystal well. In addition, it possesses a long superconducting coherence length (ξ(0) > 150 nm) that favors the establishment of Josephson coupling with Sr 2 RuO 4 . All of our Au 0.5 In 0.5 -Sr 2 RuO 4 junctions (SQUIDs) feature a naturally formed tunnel barrier.
Several important experimental issues were encountered in carrying out the phase-sensitive measurements [51]. First, the preparation of the Josephson junction or SQUID structures requires mechanical polishing of the crystal to obtain the desired junction planes. Because of the extreme sensitivity of superconductivity in Sr 2 RuO 4 to disorder, mechanical polishing clearly has a negative effect on superconductivity, consistent with the observation that only a small fraction of our samples displayed a measurable supercurrent. Second, Ru islands associated with the eutectic phase of Ru-Sr 2 RuO 4 , which has an onset superconducting transition temperature of nearly 3 K (see below), were commonly found on a polished crystal surface. These Ru islands are undesirable, so to avoid unnecessary complications every polished surface was carefully inspected under an optical microscope. Third, the applied flux, Φ ext , used to modulate the critical current, I c , needs to be close to the total flux (Φ) threading the SQUID. Finally, additional complications can result from the formation of domains of k x + ik y and k x − ik y associated with a chiral pairing state, such as the Γ 5 − state listed in table 1. The last two issues are discussed in more detail below. To ensure that the total flux threading the SQUID or the junction (Φ) is as close as possible to the applied flux (Φ ext ), it is useful to note that the total flux is given by Φ = Φ ext + Φ ind + Φ trap + Φ bkgd , where Φ ind is the induced flux, Φ trap the trapped flux and Φ bkgd the background flux. Clearly, Φ ind , Φ trap and Φ bkgd all need to be minimized. Among them, Φ bkgd is the easiest to deal with: it can be minimized by careful magnetic shielding. Φ ind is determined by the sample size and the asymmetry of the SQUID. For an opposite-face SQUID sample, Φ ind = LI circ = L(I 1 − I 2 ), where I circ is the circulating current in the loop and L is the self-inductance [65]. Early SQUID-based phase-sensitive experiments [66,67] on high-T c superconductors relied on an extrapolation of R(H ) measured at currents above I c to obtain the values at zero current, an approach criticized by others [68] and apparently abandoned in favor of the corner junction experiments [69,70]. We adopted an alternative approach by showing that I c (Φ ext = 0) corresponds to a minimum close to the T c of the SQUID. In this case, I circ → 0, so that Φ = Φ ext + Φ ind → Φ ext if Φ trap = 0. It should be noted that Φ trap , which could make conventional SQUIDs mimic the behavior of an unconventional SQUID [71]- [73], can take an arbitrary value. In general, fluxoid quantization requires that 2πm = φ 1 − φ 2 + (2π/Φ 0 )(Φ ext + Φ ind + Φ trap ), where m is an integer (or 0), and φ 1 and φ 2 are the phase drops across the two junctions of the SQUID. Clearly, φ 1 and φ 2 , the two degrees of freedom of the system, can adjust themselves to accommodate any arbitrary Φ trap . Trapped flux leads to an asymmetric envelope of I c (H ), which was used to determine whether flux was trapped in our SQUIDs. We found that warming up and cooling down the sample slowly in zero field appeared to help prepare a trapped-flux-free state in our SQUIDs.
If Sr2RuO4 features the Γ₅⁻ state listed in table 1, it is important to find a procedure that leaves our samples with only a single domain, or a known number of domains. However, domains have not been observed directly in Sr2RuO4. A safe strategy is therefore to work toward preparing a single-domain state while assuming that domains do exist. A possible way for domains to form is through a slight variation in either the superconducting transition temperature or the sample temperature that leads to nucleation of superconducting regions in isolated spots as the temperature is lowered. To minimize this tendency, we cooled the sample at a very slow, computer-controlled rate (over a period of hours), which should reduce the chances of having multiple domains as well as trapped flux. Obviously, further work is needed to detect and control the formation of domains.
Data taken close to the T_c of the GLB SQUIDs (figure 7) demonstrated that the phase of the order parameter changes by π under a 180° rotation, providing compelling evidence that Sr2RuO4 is indeed an odd-parity, spin-triplet superconductor [51]. These experiments also revealed an unexpected onset T_c as high as 3 K, which was suggested to originate in regions on the Sr2RuO4 side of the Ru/Sr2RuO4 interface, based on the anisotropic properties of this so-called 3-K phase. Enhanced superconductivity is known in other interface systems, such as at the atomically sharp interface between Ag and Ge, even though neither Ag nor Ge is superconducting [75]. However, the enhanced superconductivity near the Ru-Sr2RuO4 interface was still a surprise, given that the occurrence of superconductivity in Sr2RuO4 is sensitive to disorder, including structural imperfections. The interface between two different materials would also be expected to function as a pair breaker even if no disorder is present, which would tend to suppress rather than enhance superconductivity.
In addition, questions were raised about the nature of the 3-K phase, such as whether its pairing symmetry is also p-wave. Given that the 3-K phase occurs only near the interface region, tunneling measurements may be the most effective way to address these questions, and we therefore carried them out. On the surface of a non-s-wave superconductor, the intrinsic orientation dependence of the phase of the order parameter results in mid-gap Andreev bound states and an associated zero-bias conductance peak (ZBCP) in the tunneling spectrum [76]-[79], as seen in the high-T_c cuprates [80,81]. Andreev surface bound states were also detected in the bulk phase of Sr2RuO4 [33]. However, the fitting to those data may be problematic because the superconducting energy gap obtained from the fit is unreasonably large.
We prepared break tunnel junctions by cleaving an Sr2RuO4 single crystal containing a Ru island (figure 9) [34]. In this sample configuration, the tunneling current is dominated by conducting channels near the Ru islands, as shown by a ZBCP persisting up to 3 K. The ZBCP marks the presence of Andreev bound states (ABSs), suggesting that the eutectic phase is an unconventional, non-s-wave superconductor. Theoretically, a p-wave state with horizontal line nodes was found to yield a single peak near zero bias voltage [82], which can actually fit our data quantitatively. On the other hand, the presence of horizontal nodes appears to have been ruled out by magnetic-field-dependent specific heat results [40]. More work is needed to resolve this inconsistency.
The 3-K phase may offer insights into the mechanism of superconductivity in Sr2RuO4. Sigrist and Monien [83] developed a phenomenological theory for the 3-K phase and argued that superconductivity will nucleate in the interface region between the Ru islands and the bulk Sr2RuO4 at a temperature above the bulk T_c (figure 10(a)). It was shown that energetic considerations favor a p-wave state with a line node parallel to the interface normal and with positive and negative lobes parallel to the interface (say, a k_y state). As the temperature is lowered further, the second component emerges, forming a time-reversal-symmetry-breaking state (a k_x ± ik_y state). However, our recent tunneling measurements [84] did not reveal a proximity-induced p-wave superconducting energy gap in the interior of the Ru island, suggesting an alternative picture in which the 3-K phase originates in a region somewhere away from the interface, as shown in figure 10(b).

Figure 10. Schematics illustrating the nature of superconductivity in the 3-K phase. The Ru region is shown in yellow and the bulk Sr2RuO4 in green, with light green indicating the 3-K phase region. The physical boundary between the Ru island and Sr2RuO4 is taken to be at x = 0 (the x-axis runs along the horizontal direction). The two alternative pictures are illustrated (see text).
Discussion
Several issues regarding superconductivity in Sr2RuO4 remain unresolved. For example, all states listed in table 1 have an isotropic (full) gap. The observed power-law behaviors described above can be attributed to horizontal or vertical nodes in the superconducting order parameter [85]-[89], or to orbital-dependent superconductivity (ODS) [18,19]. In the former case, vertical nodes would imply that the order parameter is independent of k_z, whereas horizontal nodes require that the order parameter depends on k_z. In this regard, magnetic-field-dependent specific heat measurements seem to rule out a k_z dependence of the d-vector featuring horizontal nodes [40]. However, the presence of vertical nodes appears to be inconsistent with the tunneling results [34,82]. Josephson tunneling measurements, which are currently under way, can provide an independent check on the k_z dependence of the d-vector.
Even though most experiments suggest that Sr2RuO4 is a chiral p-wave superconductor represented by the Γ₅⁻ state, the phase diagram obtained with a precisely aligned in-plane magnetic field [90] does not agree with the theoretical expectations for a chiral p-wave [91]. Furthermore, domains corresponding to the k_x + ik_y and k_x − ik_y states, and the domain walls between them, have not yet been observed directly in experiments, as pointed out above [92]. Possible domain sizes inferred indirectly from various measurements vary greatly [93], adding to the confusion over this issue. The mechanism of superconductivity in Sr2RuO4 is not yet understood. Models based on ferromagnetic fluctuations [94], antiferromagnetic fluctuations [95], spin-orbit coupling [96], or Hund's rule coupling [10] have been proposed. Systematic tests of the proposed mechanisms have yet to be carried out. The Ru-Sr2RuO4 eutectic phase, which may provide insight into the mechanism of superconductivity in Sr2RuO4 because of its unexpected enhancement of T_c, also needs further study.
Conclusion
In this brief review, I have summarized our Josephson tunneling and phase-sensitive measurements of Sr2RuO4. This work represents an important step towards establishing Sr2RuO4 as an electronic counterpart of superfluid ³He, featuring odd-parity, spin-triplet superconductivity. Further work is needed to determine the precise symmetry form of the superconducting order parameter and to establish the mechanism of superconductivity in this material.
Acknowledgments
The work summarized here on Josephson tunneling and phase-sensitive measurements was carried out in my lab at Penn State in the past decade or so in collaboration with the groups of Y Maeno, R Cava and Z Mao, who provided us with high-quality single crystals. Individuals who performed the relevant measurements presented here include postdoctoral research associates R Jin, Z Mao and Z Long, former graduate student K D Nelson, who did the bulk of the work on the phase-sensitive measurements, former undergraduate student B W Clauser and current graduate students Y A Ying and R J Myers. Other students H Wang, N E Staley and C P Puls provided help in many ways. We benefited greatly from collaboration with the groups of J Kirtley, K Hasselbach and K Moler, who performed low-temperature scanning SQUID measurements that provided us with insight into our phase-sensitive measurements.
"Physics"
] |
The Disorder of Mitochondrial Dynamics Causes Deafness By Promoting Macrophage-Mediated Hair Cell Death
Mitochondrial dynamics are essential for maintaining the physiological function of the mitochondrial network, and disordered mitochondrial dynamics lead to neurodegenerative diseases. However, how mitochondrial dynamics affect auditory function in the inner ear remains unclear. FAM73a and FAM73b are mitochondrial outer membrane proteins that mediate mitochondrial fusion. Here, we found that FAM73a or FAM73b deficiency resulted in elevated oxidative stress and apoptosis of hair cells. Additionally, mitochondrial fission caused increased expression of IL-12 in basilar membrane macrophages through the accumulation of IRF1. As a bridge between innate and adaptive immune responses, hyperproduction of IL-12 further promoted Th1 polarization and tissue damage. Our data highlight an important role of mitochondrial dynamics in maintaining cochlear homeostasis and hair cell survival. Disordered mitochondrial dynamics not only disturbed hair cell function but also induced disordered immune responses.
Introduction
Hearing loss is one of the most common human diseases. According to a 2018 World Health Organization report, about 466 million people worldwide have a hearing disability, around 5% of the world's total population (https://www.who.int/en/). More than 0.1 percent of newborns suffer from hearing loss, which seriously affects a child's communication, quality of life, and educational attainment [1]. Among all deafness patients, approximately 90% suffer from sensorineural hearing loss (SNHL), which is mainly caused by the loss or damage of cochlear hair cells (HCs) and the degeneration of spiral ganglion neurons after HC injury [2]. SNHL can be caused by genetic and environmental factors, and genetic defects are responsible for at least 50% of congenital or childhood hearing loss [3]. The identification of the genes associated with hearing loss and their underlying mechanisms remains an urgent but challenging task.
Mitochondria are essential for the physiological function and survival of HCs, and their dysfunction is involved in the pathogenesis of hearing loss under noise exposure, ototoxic drug treatment, or aging [4].
Disorders of mitochondrial fission and fusion can lead to abnormal mitochondrial morphology and dysfunction, which is implicated in neurodegenerative diseases [5], but the role of mitochondrial dynamics in auditory function has not been extensively investigated. Evidence has shown that dynamin-related protein 1 (Drp-1) is necessary for mitochondrial fission and plays a central role in mitophagy [6,7]. Reduced Drp-1 expression and mitophagy are involved in age-related hearing loss [8]. Similarly, optic atrophy 1 (OPA1) controls mitochondrial inner membrane fusion, and the R445H mutation in Opa1 causes SNHL [9]. Studies have shown that OPA1 is expressed in HCs and spiral ganglion neurons, and that the deafness caused by its mutation is due to a functional change in the unmyelinated auditory nerve endings rather than to a pathological change in HCs [10][11][12]. FAM73a and FAM73b are mitochondrial outer membrane proteins that are required for mitochondrial outer membrane fusion [13][14][15]. Deficiency of FAM73a and FAM73b greatly disrupts mitochondrial morphology, leading to higher levels of reactive oxygen species (ROS) and significant reductions in ATP. However, it remains unclear whether the absence of FAM73a and FAM73b also contributes to deafness, as with Drp-1 and OPA1.
In recent years, immune responses and inflammation have also been recognized as important pathophysiological factors in HC injury [16]. As the main executors of the innate immune system of the cochlea, macrophages are widely distributed in the basilar membrane, the osseous spiral lamina, the lateral wall of the cochlea, and the spiral ganglia under physiological conditions [17]. Adaptive immunity is also considered to be involved in the cochlear immune response [18]. CD4+ T cells can infiltrate the basilar membrane and collaborate with macrophages [19]. The resident macrophages of the basilar membrane are activated and produce proinflammatory cytokines, and monocytes in the peripheral circulation also enter the basilar membrane and transform into macrophages in response to noise exposure, ototoxic drug damage, and age-related degeneration [20][21][22][23]. In models of cochlear injury, HC injury is considered to be the initiator of the immune response and is sufficient to regulate macrophage recruitment into the basilar membrane through fractalkine signaling [24]. Inhibition of the activation and recruitment of macrophages is protective against HC injury caused by ototoxic drugs [25]. FAM73b is involved in macrophage polarization and regulates the production of IL-12 in response to damage, which further results in increased production of IFN-γ in T cells [26]. Therefore, we investigated the functional changes in macrophages and T cells in the basilar membrane of the cochlea by using Fam73a and Fam73b knockout (KO) mice, as indicated by the expression of inflammatory cytokines and the associated essential signaling pathways.
In this study, we found that FAM73a and FAM73b were expressed in the mitochondria of HCs, and that Fam73a and Fam73b KO resulted in HC loss and the destruction of stereocilia structures. Deletion of FAM73a and FAM73b increased oxidative stress and apoptosis in HCs. Upon genetic ablation of FAM73a and FAM73b, the numbers of macrophages and CD4+ T cells in the cochlear basilar membrane increased, and the expression of the inflammatory cytokines IL-12 and IFN-γ was markedly elevated. After activation by endogenous damage signals, mitochondrial fission in macrophages led to increased expression of Parkin, which degraded the mono-ubiquitinated CHIP protein and stabilized the downstream transcription factor IRF1, thereby promoting the secretion of IL-12. IL-12 in turn directly promoted the production of IFN-γ in T cells, which is involved in HC injury. Our data highlight the role of FAM73a- and FAM73b-mediated mitochondrial dynamics in auditory function and clarify how their disorder affects HC survival, both by directly disturbing HC function and by indirectly regulating macrophage polarization.
Mice and genotyping
Genotypic identification of the transgenic mice was carried out according to the method described in the published literature [26]. All animal experiments were performed in accordance with protocols approved by the Animal Care and Use Committee of Southeast University and were consistent with the National Institutes of Health Guide for the Care and Use of Laboratory Animals.
Auditory brainstem response (ABR)
A TDT System III workstation running SigGen32 software (Tucker-Davis Technologies, USA) was used to record ABRs as previously described [27,28]. The mice were anesthetized by intraperitoneal injection of 0.01 g/ml pentobarbital sodium (100 mg/kg body weight). After deep anesthesia, three fine-needle electrodes were inserted under the skin of the mouse at the vertex of the skull, behind the tested ear, and on the back near the tail. The mice were then put into a soundproof room for the ABR test. The TDT hardware and software (BioSig and SigGen) were used to generate the acoustic signals and to process the responses. The ABRs were elicited with tone bursts at 4, 8, 12, 16, 24, and 32 kHz. The tests were performed at 5 dB intervals from 90 dB to 10 dB at each frequency with gradually decreasing intensity, and ABR thresholds were recorded as the lowest sound intensity at which a stable wave III could be seen and repeated.
Immunohistochemistry
The basilar membranes of the newborn mouse cochleae were dissected with microsurgery forceps and incubated with 4% paraformaldehyde for 1 h at room temperature (RT), while the basilar membranes from ossified cochleae were carefully dissected with a microsurgery scalpel after incubating in 4% paraformaldehyde and 0.5 M EDTA overnight. Whole mounts of the basilar membrane were then blocked with 10% heat-inactivated donkey serum, 1% bovine serum albumin (BSA), and 1% Triton X-100 in PBS (0.1 M phosphate buffer, pH 7.2) for 1 h at RT. The samples were incubated with primary antibodies diluted in 5% heat-inactivated donkey serum, 1% BSA, and 10% Triton X-100 overnight at 4°C. The tissues were washed three times with PBST (PBS and 1% Triton X-100) and further incubated at RT for 1 h with secondary antibodies (Alexa Fluor 647, 555, or 488, Invitrogen) diluted in 0.1% BSA and 0.1% Triton X-100. Finally, the tissues were again washed with PBST three times and mounted on a slide. A Zeiss LSM700 confocal microscope was used to take images.

qRT-PCR

Total RNA from the brain, from different parts of the cochlea, and from whole cochleae was extracted with ExTrizol Reagent (Protein Biotechnology, PR910), and reverse transcription from mRNA to cDNA was carried out using cDNA Synthesis kits (Thermo Fisher Scientific, K1622) according to the manufacturer's instructions. The qPCR was performed using an Applied Biosystems CFX96 qPCR system (Bio-Rad, Hercules, CA, USA) and SYBR Green (Rox) qPCR Master Mix (Roche Life Science, 04913850001). Validated primers were designed for the targeted DNA or mRNA sequences (Table 1). The qPCR protocol was an initial denaturing step of 15 s at 95°C followed by 40 cycles of 15 s denaturation at 95°C, 60 s annealing at 60°C, and 20 s extension at 72°C. The expression of mRNA was normalized to Gapdh, and the results were analyzed using the comparative cycle threshold (ΔΔCt) method.
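A minimal sketch of the comparative cycle threshold calculation described above (Python; the Ct values below are invented for illustration) normalizes a target gene to the Gapdh reference and expresses it as a fold change relative to a control group:

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Comparative cycle threshold (delta-delta Ct) method.

    Returns the fold change of a target gene relative to the control
    (e.g. WT) group, normalized to the reference gene (Gapdh).
    Inputs are Ct values from replicate wells.
    """
    dct = np.mean(ct_target) - np.mean(ct_ref)                 # normalize to Gapdh
    dct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)  # same for control group
    ddct = dct - dct_ctrl                                      # compare to control
    return 2.0 ** (-ddct)                                      # fold change

# Illustrative Ct values (three technical replicates each, KO vs WT):
fold = relative_expression([24.1, 24.3, 24.0], [18.2, 18.1, 18.3],
                           [26.5, 26.4, 26.6], [18.0, 18.2, 18.1])
print(f"fold change, KO vs WT: {fold:.2f}")
```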
Western blot
Cochleae from two mice were dissected in cold PBS and lysed with 150 µl RIPA Lysis Buffer (Medium, Hangzhou Fu De Biological Technology) and 3 µl 50× protease inhibitor cocktail (Hangzhou Fu De Biological Technology) at 4°C. The primary antibodies were detected with HRP-conjugated secondary antibodies using the ECL detection system. The western blot bands were semiquantified using ImageJ software; band densities were normalized to background, and the relative optical density ratio was calculated by comparison to the reference protein GAPDH or β-actin.
Immunoprecipitation (IP)
Cochleae from two mice were lysed with 150 µl RIPA lysis buffer (Medium, Hangzhou Fu De Biological Technology). CHIP was isolated with antibodies targeting CHIP (Santa Cruz Biotechnology, sc-133066), and Protein A+G was used to capture the antibodies. The ubiquitinated proteins were detected by western blot using anti-ubiquitin antibodies (Santa Cruz Biotechnology, sc-8017).
Fluorescence intensity measurement
Different groups of cochleae were fixed, labeled with the same solutions, and processed in parallel. The tissues were photographed with a confocal microscope using the same parameters. The immunolabeling intensity was measured using ImageJ software: a region of interest was drawn, and the mean gray value intensities were measured from 4 or 5 sections per cochlea.
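A minimal sketch of this intensity measurement, assuming the confocal sections are available as 2D arrays (the ROI bounds and the random placeholder images below are illustrative):

```python
import numpy as np

def mean_roi_intensity(image, roi):
    """Mean gray value inside a rectangular region of interest.

    image : 2D array of pixel intensities (one confocal section)
    roi   : (row0, row1, col0, col1) bounds of the drawn region
    """
    r0, r1, c0, c1 = roi
    return float(image[r0:r1, c0:c1].mean())

# Average over 4-5 sections per cochlea, as in the protocol:
rng = np.random.default_rng(0)
sections = [rng.integers(0, 255, size=(512, 512)).astype(float) for _ in range(5)]
per_section = [mean_roi_intensity(s, (100, 200, 150, 300)) for s in sections]
print(f"cochlea mean intensity: {np.mean(per_section):.1f}")
```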
The number and morphology of basilar membrane macrophages
Macrophages were distributed throughout the basilar membrane and exhibited dendritic, irregular, amoeboid, and spherical morphologies. They were identified with the surface markers F4/80 and Iba1, as used in previous studies [29]. To assess the number of macrophages in the apical, middle, and basal turns of the basilar membrane, F4/80-labeled macrophages were counted under the confocal microscope. Images at 20× magnification taken from each turn of the cochlear whole mounts were used as representative figures. To measure macrophage size, ImageJ software was used to outline the membrane boundary of each cell and calculate the area contained in the drawn region. Five typical cells were selected from each turn of the tissue specimen, and their average area represented the size of the macrophages in the apical, middle, and basal turns of the basilar membrane of each individual cochlea.
Macrophage phagocytosis
pHrodo® zymosan bioparticles conjugate (Invitrogen, P35365) was used to evaluate the phagocytic activity of macrophages. The fluorescence of the pHrodo® dye is activated when the zymosan bioparticles are ingested and exposed to the more acidic pH within the phagocytic vacuoles. Because the extracellular pH is more alkaline, bioparticle fluorescence is absent outside the cell. The cochleae were dissected from the skull and placed in live cell imaging solution (A14291DJ, Invitrogen). The membranous labyrinth was opened from the top of the cochlea to remove the basilar membrane, modiolus, and lateral wall tissue, thereby exposing the inner surface of the lateral wall of the scala tympani at the basal turn of the cochlea. The basal turn was then divided into several pieces so that the cochlear bone wall could lie flat on the slide. The collected tissues were incubated with pHrodo® zymosan bioparticles conjugate for 90 minutes at 37°C and then rinsed three times for 5 min each with live cell imaging solution. The tissues were fixed with 4% buffered formalin for 4 hours and then decalcified with EDTA at 4°C for 1 day. Subsequently, the tissues were incubated with the primary antibody against F4/80 and the appropriate secondary antibody to visualize macrophages.
Drug administration
Clophosome®-A clodronate liposomes (LCCA) (FormuMax, F70101C-A) provide highly efficient macrophage depletion. We intraperitoneally injected mice with LCCA at 70 mg/kg every other day from P30 to P60. When drug administration was completed, we collected and dissected the cochleae and measured HC loss and the number of macrophages.
Statistical analysis
Microsoft Excel and GraphPad Prism software were used for statistical analyses. All data are presented as mean ± SD, and all experiments were repeated at least three times. Two-tailed, unpaired Student's t-tests were performed. P-values < 0.05 were considered significant, and the level of significance is indicated as *P < 0.05, **P < 0.01, ***P < 0.001. All statistical tests were justified as appropriate, and the data met the assumptions of the tests. The variance was similar between the statistically compared groups.
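The group comparison described above can be sketched as follows (Python with scipy assumed available; the threshold values are invented for illustration):

```python
import numpy as np
from scipy import stats

def compare_groups(wt, ko):
    """Two-tailed, unpaired Student's t-test with significance stars."""
    t, p = stats.ttest_ind(wt, ko)  # assumes similar variances, as stated above
    stars = "ns"
    for threshold, label in [(0.05, "*"), (0.01, "**"), (0.001, "***")]:
        if p < threshold:
            stars = label
    print(f"WT = {np.mean(wt):.1f} +/- {np.std(wt, ddof=1):.1f} (SD), "
          f"KO = {np.mean(ko):.1f} +/- {np.std(ko, ddof=1):.1f} (SD), "
          f"p = {p:.4f} ({stars})")
    return p

# Illustrative ABR thresholds (dB SPL) at one frequency, n = 5 mice per group:
compare_groups([35, 40, 35, 30, 40], [60, 65, 70, 60, 65])
```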
Results
FAM73a and FAM73b are expressed in the cochlea

To determine whether FAM73a and FAM73b are expressed in the cochlea, qRT-PCR was performed on brain and cochlear tissue of postnatal day 3 (P3) WT mice. Fam73a and Fam73b were indeed expressed in the basilar membrane, lateral wall, and modiolus of the cochlea, although their expression in these regions was not as high as in brain tissue (Fig. 1A,D). To study their expression pattern during postnatal development, we further examined the expression of Fam73a and Fam73b by qRT-PCR in different age groups. The levels of Fam73a and Fam73b decreased from P14 but were still present in P30 mice (Fig. 1B,E). Western blots also verified the protein levels of FAM73a and FAM73b in the cochleae of P14 and P30 WT mice (Fig. 1C,F). We then immunolabeled FAM73a and FAM73b together with Myosin7a in the whole-mount basilar membrane. Confocal imaging at P3 revealed that FAM73a and FAM73b were expressed in the cytoplasm of HCs rather than in the nucleus (Fig. 1G-H). In P14 and P30 WT mice, FAM73a and FAM73b were stably expressed in the cytoplasm of HCs (Fig. 1I-J). These results suggest that FAM73a and FAM73b are expressed in the cochlea at all stages of development.
FAM73a and FAM73b KO mice show progressive hearing loss
To evaluate whether FAM73a and FAM73b affect auditory function and HC survival, we constructed Fam73a and Fam73b KO mice. We first confirmed the efficiency of FAM73a and FAM73b deletion in the cochlea of P30 KO mice by qRT-PCR (Supplementary Fig. 1A,C) and immunolabeling (Supplementary Fig. 1B,D). We then evaluated hearing in these mice using the ABR test. In P30 KO mice, ABR thresholds were significantly increased at middle and high frequencies after Fam73a deletion, while Fam73b deficiency led to elevated ABR thresholds at all frequencies (Fig. 2A,G). Similar high-frequency hearing loss was observed in both KO lines, but Fam73b deletion caused earlier and more severe hearing loss at low and middle frequencies than Fam73a deletion. Although inner HC loss was not observed, outer hair cell (OHC) loss was consistent with the trend of hearing loss. Scattered OHC loss was seen in the apical, middle, and basal turns in both KO lines, and the most severe damage was found in the basal turn (Fig. 2B,C,H,I). However, Fam73a KO mice showed no significant OHC loss in the apical turn (Fig. 2B,C). In P60 KO mice, the absence of FAM73a and FAM73b resulted in severe hearing loss (Fig. 2D,J), and the OHC loss in each turn was more severe than in P30 KO mice (Fig. 2E,F,K,L). We also examined the ultrastructure of the stereocilia in P30 KO mice with a scanning electron microscope and found that knockout of Fam73a and Fam73b led to degeneration of the stereocilia (Fig. 2M,N). Notably, the OHC loss caused by FAM73b deficiency was greater than that caused by FAM73a deficiency, and the degree of OHC loss and hearing loss at P60 was worse than at P30.
Lack of FAM73a and FAM73b causes apoptosis of HCs by enhancing oxidative stress
Previous evidence revealed that the absence of Fam73a and Fam73b results in elevated oxidative stress [13]. To clarify how FAM73a and FAM73b deficiency injures HCs, we evaluated the changes in mitochondrial ROS in the HCs of Fam73a and Fam73b KO mice by detecting the oxidative stress markers 3-nitrotyrosine (3-NT) and 4-hydroxynonenal (4-HNE). In P21 mice, confocal images showed that the levels of 3-NT and 4-HNE were already increased in Fam73a or Fam73b KO mice (Fig. 3A,C,E,G). Quantification of the immunofluorescence intensity confirmed a significant increase in these two KO lines compared to WT controls (Fig. 3B,D,F,H). Western blots also showed increased expression of 3-NT in both Fam73a and Fam73b KO mice compared to controls (Fig. 3I,J). Because increased oxidative stress can activate apoptosis signaling pathways and induce cell death, we performed TUNEL staining to identify apoptotic HCs in P21 Fam73a and Fam73b KO mice. Immunofluorescence staining showed that TUNEL-positive cells were found in these two KO lines but not in WT mice (Fig. 4A,B,D,E). The qRT-PCR results showed that the expression of proapoptotic marker genes, including Bax, Casp3, Casp8, Casp9, and Apaf1, was significantly higher in the KO mice than in WT mice (Fig. 4C,F). These results indicate that lack of FAM73a or FAM73b increases oxidative stress in HCs, and the subsequent high ROS levels cause apoptosis of HCs.
FAM73a and FAM73b deficiency provokes innate and adaptive immune responses in the basilar membrane

Previous studies have shown that macrophages and CD4+ T cells are recruited into the basilar membrane of the cochlea after HC damage [19,24]. FAM73b deficiency has been reported to be essential for the polarization of type 1 macrophages under Toll-like receptor stimulation, by controlling the switch of mitochondrial morphology from fusion to fission [26]. Therefore, we investigated whether Fam73a and Fam73b in immune cells are also involved in the development of deafness. We measured the infiltration of macrophages and CD4+ T cells into the basilar membrane in Fam73a and Fam73b KO mice. Strikingly, the numbers of macrophages in the basilar membrane were increased in P30 Fam73a and Fam73b KO mice compared to WT controls (Fig. 5A,B,E,F). In P60 KO mice, the numbers of macrophages were significantly increased in the apical, middle, and basal turns (Fig. 5C,D,G,H). In P30 WT mice, resident macrophages exhibited a slender body with multiple dendritic projections of various lengths. Macrophages in Fam73a KO mice showed a similar morphology (Fig. 5A), but their bodies were enlarged in Fam73b KO mice (Fig. 5E). In P60 WT mice, the macrophage bodies in the middle and basal turns were larger than those in P30 mice (Fig. 5C,G). Fam73a KO mice exhibited a similar macrophage morphology. In Fam73b KO mice, most of the macrophages transformed into giant irregular or spherical shapes (Fig. 5C,G). The quantitative results confirmed these observations: the average size of the macrophages in both KO lines was significantly larger than in WT mice (Fig. 5J,M), suggesting an activating morphological transformation in these two KO lines. This difference in morphological transformation between the two KO lines might be due to the different degrees of HC damage, because Fam73b KO mice showed earlier and more severe HC damage. Furthermore, we also found that the number of CD4+ T cells in P45 KO mice was significantly increased along the whole length of the basilar membrane compared to WT mice (Fig. 5I,K,L,N), indicating that an adaptive immune response occurred in both KO lines. These results support the hypothesis that lack of FAM73a and FAM73b provokes innate and adaptive immune responses in the basilar membrane.
Lack of FAM73a and FAM73b promotes the expression of IL-12 and IFN-γ in the basilar membrane
Because macrophages are resident immune cells of the basilar membrane, they can affect pathological changes in HCs. The phagocytic activity of macrophages is involved in the removal of unwanted or damaged cochlear tissue under both normal and pathological conditions. We therefore examined, using a confocal microscope, whether the phagocytic function of Fam73a and Fam73b KO macrophages distributed on the lateral wall of the scala tympani at the basal turn was changed (Fig. 6A-H). Although no significant difference in the number of macrophages was found between WT and Fam73a KO mice (Fig. 6B), the number in Fam73b KO mice was significantly decreased (Fig. 6F), implying that more macrophages are recruited into the basilar membrane owing to the severe HC damage in Fam73b KO mice. However, the phagocytic activity of these macrophages displayed no difference compared to WT controls, as indicated by similar uptake of pHrodo® fluorescent bioparticles (Fig. 6C,G). The quantitative results confirmed these observations (Fig. 6D,H). Collectively, these findings demonstrate that FAM73a and FAM73b deficiency did not alter the phagocytic capacity of cochlear macrophages.
Because differences in the recruitment of macrophages and T cells were observed between KO and WT mice, we further evaluated whether FAM73a and FAM73b deficiency triggered cochlear inflammatory activity. We examined the transcriptional levels of proinflammatory cytokines (Il12a, Il12b, Ifng, Il6, Il1b, and Tnfa) and of the anti-inflammatory cytokine Il10 in the cochlea by qRT-PCR. In P60 mice, the mRNA levels of Il12a and Ifng were significantly increased in both Fam73a and Fam73b KO mice compared with the WT group (Fig. 7A), while Il10 was significantly decreased (Fig. 7D). No difference in the mRNA levels of Il6 and Il1b was observed, suggesting that Fam73a and Fam73b deficiency induced only a partial M1 phenotype. Consistently, the M2-type macrophage markers Cd206 and Arg1 were significantly decreased in P60 Fam73a or Fam73b KO mice compared with WT mice (Fig. 7B,E). Western blot analysis confirmed the downregulation of ARG1 protein expression in both KO lines (Fig. 7C,F). To further verify the enhanced expression of IL-12, we immunolabeled IL-12 in the macrophages of the basilar membrane. We found that the expression of IL-12 in macrophages was upregulated in both P60 KO lines (Fig. 7G-J). To determine whether the antigen presentation function of macrophages was enhanced along with the increased expression of IL-12, we immunolabeled MHCII, an antigen-presenting protein, in macrophages. Interestingly, we also found that the expression of MHCII in Fam73a or Fam73b KO macrophages was increased compared with WT mice (Fig. 7K-N). These findings suggest that mitochondrial dynamics regulates IL-12 expression and antigen presentation capacity in macrophages, which further promotes the expression of IFN-γ in CD4+ T cells.
To confirm the role of macrophages in HC damage in the cochlea, LCCA was injected intraperitoneally into KO mice from P30 to P60 to deplete macrophages. We found that the numbers of macrophages in the basilar membrane of P60 KO mice were significantly decreased after LCCA injection (Fig. 8A-B,E-F). After depletion of macrophages by LCCA, HC loss was markedly reduced in P60 Fam73a and Fam73b KO mice compared to the untreated groups (Fig. 8C-D,G-H). These results suggest that macrophages play an important role in HC injury in Fam73a and Fam73b KO mice.
FAM73a and FAM73b control macrophage function by regulating Parkin-CHIP-IRF1 signaling

A previous study reported that mitochondrial fission caused by Fam73b ablation increases the expression of Parkin and IRF1, which further promotes the production of IL-12 [26]. To determine the role of Parkin-IRF1 signaling in the cochlea of Fam73a and Fam73b KO mice, we first performed qRT-PCR to determine the mRNA level of Irf1. The expression of Irf1 was not significantly different in P60 Fam73a or Fam73b KO mice compared with WT controls (Fig. 9A,D). However, western blots showed significantly upregulated expression of IRF1 in the cochleae of these two KO lines (Fig. 9B-C,E-F). To further verify the increased expression of IRF1 in macrophages of the basilar membrane, we co-stained IRF1 and Iba1 in whole-mount tissue. Confocal images showed markedly enhanced expression of IRF1 in KO macrophages compared with WT littermates (Fig. 9G-H,I-J). Mono-ubiquitinated CHIP promotes the degradation of IRF1, and accordingly we found that the protein level of mono-ubiquitinated CHIP was significantly reduced in P60 Fam73a or Fam73b KO mice compared with WT mice (Fig. 10A-D). Lastly, qRT-PCR assays (Fig. 11A,D) and western blots (Fig. 11B-C,E-F) showed that the mRNA and protein levels of Park2 were significantly upregulated in these two P60 KO lines. To further confirm that the expression of Parkin was upregulated in macrophages of the basilar membrane, we immunolabeled Parkin and Iba1 in macrophages. Increased expression of Parkin was observed in confocal images of macrophages from KO mice (Fig. 11G-H,I-J). Together, these results suggest that cochlear macrophages share with peripheral macrophages a similar regulatory mechanism, the Parkin-CHIP-IRF1 signal, controlling the production of IL-12.
Discussion
Mitochondria provide energy for cellular activity and thus play key roles in cell survival, apoptosis, and metabolism. The switching of mitochondrial morphology between fusion and fission affects mitochondrial function and leads to changes in ROS production and mitophagy. Previous studies have shown that increased ROS levels damage HCs in noise-, drug-, and age-related hearing loss [30][31][32]. Mitophagy protects against HC injury caused by noise exposure, ototoxic drug treatment, and age-related degeneration [8,33,34]. However, the roles in HCs of the mitochondrial membrane proteins regulating mitochondrial morphology, including Mitofusin 1 (MFN1) and MFN2, remain unclear owing to the embryonic lethality of the deficient mice. OPA1, a protein regulating mitochondrial inner membrane fusion, causes SNHL through auditory neuropathy rather than HC injury. Recent studies showed that Fam73a and Fam73b KO mice are viable and can therefore serve as a suitable model for studying the influence of mitochondrial dynamics on auditory function in the inner ear. Our results indicate that FAM73a and FAM73b deficiency results in HC damage through increased production of ROS and the destruction of stereocilia structures. These results suggest that the absence of FAM73a and FAM73b has a harmful effect on HCs through oxidative stress rather than a protective effect through mitophagy.
Previous studies have shown that HC injury increases the number of macrophages and transforms macrophage morphology into an activated shape [24,25]. Our study shows that the number of macrophages significantly increases, and that they become larger, when HCs are severely damaged. To further explore the role of macrophages in HC death, we examined their phagocytic capacity. However, the phagocytic capacity of macrophages localized at the luminal surface of the scala tympani did not change in Fam73a and Fam73b KO mice compared with controls. Intraperitoneal injection of LCCA not only depleted macrophages but also significantly reduced HC damage. Therefore, we believe that macrophages play a detrimental role in HC injury. These results indicate that HC injury activates macrophages by releasing certain endogenous factors that trigger the cochlear immune response; future studies should identify these molecules and clarify the mechanism of their induction.
However, there has been no report on the expression of IL-12 and IFN-γ in these models of cochlear aseptic inflammation. Clinical studies based on blood samples from patients have suggested that TNF-α might play a critical role in sudden SNHL, while IL-12, IFN-γ, and IL-10 were found not to participate in the pathophysiology of sudden SNHL [39,40]. We found that the expression of the proinflammatory cytokines IL-12 and IFN-γ increased, while that of the anti-inflammatory cytokine IL-10 decreased, in Fam73a and Fam73b KO mice, suggesting a novel inflammatory mechanism in SNHL caused by the disorder of mitochondrial morphology.
Previous studies have shown that the production of inflammatory cytokines in the cochlea under noise exposure or ototoxic drug treatment is attributed to the activation of Toll-like receptors on the surface of macrophages [41]. TLR4 promotes the production of ROS and the activation of the downstream NF-κB signaling pathway [42][43][44]. However, our results reveal a novel signaling pathway composed of Parkin, CHIP, and IRF1 [26]. This pathway regulates the production of IL-12 in macrophages independently of ROS and NF-κB, in contrast to previous studies. Our data show that the expression of IRF1 and Parkin in macrophages is increased, while the level of mono-ubiquitinated CHIP is decreased.
These results indicate that macrophages modulate the production of inflammatory cytokines through a novel signaling pathway in the cochlea of Fam73a and Fam73b KO mice.
Although macrophages, representing innate immune responses, have been well studied in the cochlea, the role of adaptive immune responses remains unclear. Adaptive immune cells have been shown to be involved in noise-induced cochlear damage [18]. CD4+ T cells exist in the modiolus of the cochlea under physiological conditions and can enter the basilar membrane together with macrophages in response to noise-induced damage [19,45]. Additionally, inhibition of nuclear factor of activated T cells (NFAT) protects HCs against aminoglycoside ototoxicity [46]. Although these studies have shown the presence of T cells in the inner ear and that their activation can damage HCs, the underlying mechanism affecting HC injury remains unknown. Our results showed that the number of CD4+ T cells in the basilar membrane and the expression of IFN-γ derived from CD4+ T cells were significantly elevated in the cochleae of Fam73a and Fam73b KO mice. These results imply that CD4+ T cells induce HC damage by secreting IFN-γ, which is, to our knowledge, the first report of a T cell-mediated mechanism of HC damage.
In conclusion, our study identifies disrupted mitochondrial morphology switching as a new risk factor for SNHL.
"Biology"
] |
Advanced Cold Molecule Electron EDM
Measurement of a non-zero electric dipole moment (EDM) of the electron within a few orders of magnitude of the current best limit of |d_e| < 1.05 × 10^-27 e·cm would be an indication of physics beyond the Standard Model. The ACME Collaboration is searching for an electron EDM by performing a precision measurement of electron spin precession in the metastable H state of thorium monoxide (ThO) using a slow, cryogenic beam. We discuss the current status of the experiment. Based on a data set acquired from 14 hours of running time over a period of 2 days, we have achieved a 1-sigma statistical uncertainty of 1 × 10^-28 e·cm/T^(1/2), where T is the running time in days.
Introduction
At accelerators such as the Large Hadron Collider (LHC), particles of the highest accessible energies are used to probe physics at its most fundamental level. On a complementary front, the precise measurement techniques of atomic physics can access the vacuum fluctuations these massive particles produce. Because the search for the electron electric dipole moment (EDM) is a sensitive probe of new physics, this effort has long been at the forefront of such research [2] [3]. A high-precision measurement that discovers the electron EDM or sets a stringent new limit upon its size would place strong constraints on extensions to the Standard Model of particle physics (SM). A general feature of SM extensions is the prediction of an EDM for electrons and nucleons, with many theories indicating an electron EDM just below the current upper limit [4] [5] (d e < 1.05 × 10 −27 e · cm with 90% confidence [1], measured by the Hinds group). The symmetries of the SM, on the other hand, strongly suppress EDMs, giving rise to electron EDM predictions over a hundred billion times smaller than the current limit [6]. One well motivated SM extension is supersymmetry. Supersymmetric models require fine tuning of supersymmetric parameters to fit the current EDM limits [7] [8]. An electron EDM measurement that is 10-100 times as sensitive as the current upper bound must either observe an EDM, revealing a breakdown of the Standard Model, or set a new limit requiring such unnatural suppression of supersymmetric parameters that many supersymmetric models would have to be revised or rejected [9].
The Advanced Cold Molecule EDM Experiment (ACME) [10] is a new effort to measure the electron EDM using thorium monoxide (ThO). ThO is a polar molecule with two valence electrons. In the H 3∆1 state [11], one of these electrons occupies a σ-orbital, and its EDM is relativistically enhanced due to the Sandars effect [12], while the other valence electron occupies a δ-orbital and allows the molecule to be easily polarized. The σ-state electron interacts with approximately 20 full atomic units of effective electric field (∼ 100 GV/cm) in a molecular state that can be oriented with very modest laboratory fields (∼ 10 V/cm) [13]. The interaction of this effective molecular field with a non-zero electron EDM would manifest itself as a phase shift in ACME's Ramsey-type measurement protocol. Taking advantage of recent improvements in technologies and methods, including a new slow, cold, and intense beam source [14] and ThO's near-ideal 3∆1 state structure (see e.g. [10][15][16]), we have developed an experiment with an unprecedented electron EDM statistical sensitivity of about 1 × 10^-28 e·cm in one day of averaging time. This is 10 times better than the current experimental limit [1]. As discussed below, ACME's systematic errors are also projected to be smaller than those of past experiments and can be checked with high precision on the time scale of days. We are currently studying various possible sources of systematic error in preparation for reporting a new result.
Atomic and molecular electron EDM experiments
The signature of a permanent electron EDM, d_e, is an energy shift ε_EDM of an unpaired electron (or electrons) in an electric field E:

ε_EDM = −d_e · E. (1)

In the vicinity of some atomic nuclei, electrons experience very strong electric fields [12][21][22]. These internal atomic and molecular fields can be partially or completely oriented by polarizing the atom or molecule, which together with relativistic effects gives the electron EDM a non-zero average energy shift. Per Eq. (1), this shift can be interpreted as an interaction between d_e and an average effective electric field E_eff produced by the atomic nucleus. The size of E_eff can be shown to scale approximately as the cube of the atomic number Z [23]. Thus, the species that yield the most sensitive (i.e. largest ε_EDM) electron EDM measurements are heavy (large Z), highly polarizable atoms and molecules with unpaired valence electrons whose wavefunctions have a large amplitude near the nucleus. These principles have guided the search for the electron EDM for the last fifty years, during which time the strongest limits have consistently been set by atomic and molecular experiments. Table 1 summarizes the two most recent EDM upper bounds, obtained with atomic thallium (Tl) and the polar molecule ytterbium fluoride (YbF), and compares the sensitivity of these experiments with ACME's demonstrated sensitivity.
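As a rough, back-of-the-envelope illustration of these scales (not a calculation from the original paper), the snippet below converts an EDM at the current upper limit, in an effective field of 100 GV/cm, into an energy and frequency shift, and evaluates the approximate Z^3 scaling between Tl (Z = 81) and Th (Z = 90):

```python
# Back-of-the-envelope scale of the EDM energy shift (illustrative only).
H_EV = 4.135667696e-15   # Planck constant, eV s

d_e = 1.05e-27           # current upper limit, e cm
E_eff = 100e9            # effective field, V/cm (ThO estimate, ~100 GV/cm)

shift_eV = d_e * E_eff   # |eps_EDM| = d_e * E_eff; (e cm)*(V/cm) = eV
print(f"energy shift : {shift_eV:.2e} eV")
print(f"frequency    : {shift_eV / H_EV * 1e3:.1f} mHz")

# Approximate Z^3 scaling of E_eff: thorium (Z = 90) vs thallium (Z = 81)
print(f"(90/81)^3 = {(90 / 81) ** 3:.2f}")
```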
Thorium monoxide electron EDM
ACME's molecule of choice, ThO, combines the aforementioned benefits of a high-Z, polar molecule with several other powerful advantages. These properties of ThO conspire to increase ACME's statistical sensitivity compared to previous electron EDM experiments, mitigate the technical demands of working with molecules rather than atoms, and suppress or rule out many systematic errors [10].
Meyer and Bohn [11] have calculated the effective internal electric field E_eff of fully polarized ThO to be ∼ 100 GV/cm, which is among the largest of any investigated species. This field is nearly 4 times as large as the estimated field in fully polarized YbF [24], nearly 8 times as large as the E_eff achieved in partially polarized YbF in the Hinds experiment [17], and over 1000 times larger than the E_eff achieved in the Tl experiment [19]. Moreover, ThO possesses a low-lying metastable state H 3∆1 (see Fig. 1), which exhibits several features beneficial to an EDM experiment. Firstly, it has a measured lifetime of 1.8 ms [10], sufficient to perform our Ramsey experiment in a molecular beam with a coherence time of 1.1 ms (see Section 3.1). This is comparable to the coherence times in both the YbF (642 µs [1]) and the Tl (∼ 2.5 ms [25]) electron EDM experiments. Secondly, the spin and orbital magnetic moments of a state with 3∆1 angular momentum cancel almost perfectly [10], and the residual g-factor is measured to be g_H,J=1 = 4.3(3) × 10^-3 [13]. This small magnetic moment renders the experiment highly insensitive to magnetic field imperfections.
Finally, the most advantageous property of the H 3∆1 state of ThO is its extremely large static electric dipole polarizability, resulting from a pair of nearly degenerate, opposite-parity sublevels split by only a few hundred kHz [11][26][15]. This level structure gives polarizabilities on the order of 10^4 or more times larger than for a more typical diatomic molecular state, in which an applied electric field polarizes the molecule by mixing opposite-parity rotational levels typically spaced by many GHz. The opposite-parity sublevels of the H, J = 1 state are formed by even and odd combinations of molecular orbitals with opposite signs of the quantum number Ω ≡ n̂ · J (the projection of the total angular momentum on the molecular bond axis) and are a general feature of states with Ω ≥ 1 in Hund's case (c) molecules [27][28]. Such "Ω-doubled" states are immensely valuable to electron EDM searches because they can be fully mixed in electric fields of only a few tens or hundreds of V/cm, completely polarizing the molecule [28][29]. Thus, EDM experiments on molecules with Ω-doublets can take full advantage of the molecules' effective internal field while avoiding the technical challenges and potential systematic errors introduced by large laboratory fields. Furthermore, because the effective electric field in a fully polarized molecule is independent of the externally applied electric field E, the electron EDM signal is also independent of the magnitude of the applied field [see Eq. (6)], allowing such experiments to set limits on systematic effects correlated with |E|. Another benefit of the Ω-doublet in ThO is that the polarized H-state molecule can be spectroscopically prepared with its dipole either aligned or anti-aligned with E, allowing us to switch the sign of the electric field experienced by the electron EDM without physically changing the laboratory field [30]. As discussed in Section 4.2, this provides a way to rule out systematic errors correlated with the sign of the applied field, such as leakage currents, motional magnetic fields, and geometric phases [10][31]. The ACME experiment is currently taking data to improve its statistics and set limits on possible systematic errors.
Besides these features, ThO also provides manifold technical advantages. All of the relevant optical transitions (see Fig. 2) are at wavelengths [38] accessible to diode lasers. In addition, ThO has no nuclear spin and so avoids the complexities of hyperfine structure. Finally, despite the fact that ThO is chemically reactive and its precursors are highly refractory, it can be produced in large quantities in a cryogenic buffer gas beam [14] (see Section 3.2).
ACME experiment overview
In order to measure the electron EDM, ACME produces a high-flux beam of ThO and uses an optical state preparation and readout scheme to detect the Ramsey fringe phase shift resulting from a non-zero d e · E eff . The measurement and apparatus are described here.
Measurement scheme
The ACME apparatus and measurement scheme are illustrated in Fig. 3 and described in [10]. Molecules from the beam source enter the interaction region and are intercepted by an optical pumping laser tuned to the X → A transition (see Fig. 2). Excitation by this laser and subsequent spontaneous A → H decay populate the H state. The measurement is performed in select sublevels in the ground ro-vibrational level (v = 0, J = 1) of the H state. In the absence of an applied electric field E, sublevels in this manifold are identified by their quantum numbers M_J = ±1, 0 (the projection of J along the lab-frame quantization axis ẑ) and P = ±1 (parity). The opposite-parity Ω-doublet levels in the H state have a very small splitting (∼ 400 kHz [11][13][26]), which we neglect. When a sufficiently large (more than ∼ 10 V/cm) electric field E is applied collinear with ẑ, the P = ±1 sublevels with the same value of M_J mix completely; the resulting eigenstates have complete electrical polarization, described by the quantum number N ≡ sgn(n̂ · E) = ±1. (The M_J = 0 sublevels do not mix.) The relevant energy levels are shown in Fig. 1. The tensor Stark shift ∆_St is defined as the magnitude of the shift of the oriented |M_J| = 1 levels from the unperturbed M_J = 0 levels. A magnetic field B ≈ 10 mG is also applied collinear with ẑ, lifting the degeneracy of the M_J = ±1 levels.

Figure 1. Level structure of the H, J = 1 state. In the absence of applied E ∥ ẑ and B ∥ ẑ fields, the stationary states are the Ω-doubled parity eigenstates (1/√2)(|Ω = +1⟩ ± |Ω = −1⟩), which are split by a few hundred kHz (solid gray lines). E-fields of ∼ 10 V/cm fully mix these doublets in the M_J ≡ ẑ · J = ±1 states by resolving the aligned and anti-aligned orientations (N ≡ sgn(n̂ · E) = sgn(ẑ · E) M_J Ω = ±1) of the internuclear axis n̂. The linear Stark splitting between these N states (dotted gray lines) is measured to be 2.13 MHz/(V/cm). In an applied B-field, the measured Zeeman shift (dashed gray lines) between the M_J = ±1 states of each N sublevel is ±12 kHz/G [13]. If d_e ≠ 0, these M_J levels experience an additional relative shift equal to ±2 d_e E_eff (solid black lines). These relative shifts are in opposite directions in the two N levels, since E_eff points in opposite directions.

Figure 2. Relevant electronic states and transitions of ThO [33]. All relevant states are in the ground vibrational level. The electronic states are denoted by letters, and the angular momentum character of each state is indicated by molecular spectroscopy symbols. The wavelength of each transition is given in nm. The ACME measurement scheme makes use of both diode-laser-pumped excitations (solid arrows) and spontaneous decays (dotted arrows), as described in Section 3.1.
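Putting in the measured coefficients quoted in the Fig. 1 caption, a short numerical sketch gives the scale of each splitting. The applied-field values are the typical experimental fields mentioned in the text; the trial d_e value is arbitrary.

```python
# Scale of the H, J = 1, |M_J| = 1 level splittings, using the measured
# coefficients quoted in the Fig. 1 caption (illustrative sketch).
H_EV = 4.135667696e-15   # Planck constant, eV s

stark_coeff = 2.13e6     # Hz per (V/cm): N = +1 vs N = -1 splitting
zeeman_coeff = 12e3      # Hz per G: M_J = +1 vs M_J = -1 splitting

E = 140.0                # V/cm, typical experimental field
B = 0.010                # G (10 mG)
d_e = 1e-28              # e cm (trial value)
E_eff = 100e9            # V/cm (~100 GV/cm)

stark = stark_coeff * E
zeeman = zeeman_coeff * B
edm = 2 * d_e * E_eff / H_EV   # (e cm)*(V/cm) = eV, divided by h -> Hz

print(f"Stark splitting (N = +1 vs -1)   : {stark / 1e6:.0f} MHz")
print(f"Zeeman splitting (M_J = +1 vs -1): {zeeman:.0f} Hz")
print(f"EDM splitting                    : {edm * 1e3:.1f} mHz (sign flips with N)")
```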
Since the H state is populated by spontaneous decay from A, it is initially in a mixed state, with all sublevels used in the experiment approximately equally populated. By coupling the molecules to a strong state-preparation laser driving the H → C transition, we deplete the coherent superposition of |M_J = ±1; N⟩ states that couples to the laser polarization ε_p, leaving behind a dark state. With the laser polarization ε_p = ŷ, for example, the prepared state of the molecules is the orthogonal dark superposition

|ψ(0); N⟩ = (1/√2)(|M_J = +1; N⟩ + |M_J = −1; N⟩).

Figure 3. Schematic of the ACME apparatus and measurement described in Section 3. On the left, a pulse of gas-phase ThO molecules is produced and cooled in a buffer gas cell and flows out towards the right in a beam (see …).

For given N, E, and B, the energies of the two prepared sublevels are

E(M_J, N) = −M_J g_{H,J=1} µ_B B B̂ − N ∆_St − M_J N Ê d_e E_eff, (3)

where g_{H,J=1} = 4.3(3) × 10^-3 and d_{H,J=1} = 0.84(2) e a_0 are the magnetic g-factor and electric dipole moment of the H, J = 1 state, respectively [13], µ_B is the Bohr magneton, e is the electron charge, and a_0 is the Bohr radius. The terms (from left to right) give the interaction of the magnetic dipole with the external magnetic field, the Stark shift ∆_St ≈ d_{H,J=1}E, and the interaction of the electron EDM with the effective molecular field. Here we assume that the H state is fully polarized, which occurs in external fields of ∼ 10 V/cm, much smaller than the typical experimental field of 140 V/cm. The magnitudes of applied field vectors are given in Roman font, e.g. B = |B|. The hat denotes the sign of a quantity's projection on the lab-fixed quantization axis of the experiment, e.g. B̂ = sgn(ẑ · B). This simple formula neglects a large number of important terms, such as the electric field dependence of the g-factors [39], background fields, motional fields, etc., but it is sufficient to explain the basic measurement procedure. After free evolution during flight (over a distance L = 22 cm in our experiment), the final wavefunction of the molecules is

|ψ(τ); N⟩ = (1/√2)(e^{-iφ} |M_J = +1; N⟩ + e^{+iφ} |M_J = −1; N⟩).

For a molecule with velocity v along the beam axis, the accumulated phase φ can be expressed as

φ = −(1/ħ) ∫ [g_{H,J=1} µ_B B B̂ + N Ê d_e E_eff] dt,

with the integral taken over the molecule's flight through the interaction region. Using the fact that our beam source has a narrow forward velocity distribution (with average forward velocity v̄ and spread ∆v ≪ v̄, see Section 3.2), we make the approximation that all molecules experience the same phase shift as they traverse the interaction region. Furthermore, because the E- and B-fields are highly uniform along the length of the interaction region, we can pull out the integrand and write

φ = −(g_{H,J=1} µ_B B B̂ + N Ê d_e E_eff) τ/ħ ≡ φ_B + φ_E, τ = L/v̄, (6)

for all molecules in the beam, where the EDM contribution to the phase is

φ_E = −N Ê d_e E_eff τ/ħ. (7)

The phase φ is detected by measuring populations in two "quadrature components" |X⟩_N and |Y⟩_N of the final state, where we define

|X⟩_N = (1/√2)(|M_J = +1; N⟩ + |M_J = −1; N⟩), |Y⟩_N = (1/√2)(|M_J = +1; N⟩ − |M_J = −1; N⟩).

The quadrature state |X⟩_N (|Y⟩_N) is independently detected by excitation with a laser coupling the H and C states whose polarization is ε_d = x̂ (ε_d = ŷ). The C state quickly decays to the ground state, emitting fluorescence at 690 nm, which we collect with an array of lenses and focus into fiber bundles and light pipes. These in turn deliver the light to two photomultiplier tubes (PMTs, Hamamatsu R8900U-20), where it is detected. This scheme allows for efficient rejection of scattered light from the detection laser, since the emitted fluorescence photons are at a much shorter wavelength than the laser. The probability of detecting a molecule in the quadrature state |X⟩_N (|Y⟩_N), given by P_X (P_Y), can be expressed as P_X = cos²φ (P_Y = sin²φ). The detected fluorescence signal from each quadrature state is proportional to its population. We express these signals (S_X and S_Y) as a number of photoelectron counts per beam pulse and write S_X(Y) = S_0 P_X(Y), where S_0 is the total signal from one beam pulse.
Thus, S_X and S_Y trace out two sinusoidal curves (or Ramsey fringes) of opposite phase as a function of applied magnetic field. For the highest sensitivity to d_e, we "sit on the side of the Ramsey fringe" where small changes in φ_E are most noticeable, i.e. where ∂S_X(Y)/∂φ_E is maximized. Therefore, we adjust the magnetic field to yield a bias phase |φ_B| = π/4 and rewrite S_X and S_Y as

S_X(Y) = (S_0/2)[1 ∓ sgn(φ_B) sin(2φ_E)].

Then the EDM phase φ_E can be determined by constructing the quantity A, known as the asymmetry:

A ≡ (S_X − S_Y)/(S_X + S_Y) = −sgn(φ_B) sin(2φ_E) ≈ −2 sgn(φ_B) φ_E.

Note from Eq. (7) that φ_E is odd in Ê and N, even in B, and proportional to E_eff. In Section 4 we discuss how to use these correlations to isolate the EDM term from various systematic effects. The shot-noise-limited statistical uncertainty in φ_E is 1/(2C√N), where N is the total number of photon counts and the quantity C introduced in this expression is the Ramsey fringe contrast (or visibility), which accounts for inefficiencies in state preparation and varying precession times for different molecules. Therefore, the shot-noise-limited uncertainty in the measured EDM value is [from differentiating d_e with respect to φ_E in Eq. (7)] [10]

δd_e = ħ / [2 C τ E_eff (Ṅ T)^(1/2)],

where τ = L/v̄ is the precession time of the molecules in the fields, Ṅ is the time-averaged counting rate of the detectors, and T is the total experimental running time. The quantities τ and E_eff are determined by physical properties of the H state, as described above, and the large ThO fluxes achieved by the ACME beam source help to keep our uncertainty low by providing a large Ṅ.
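A compact numerical sketch of this phase extraction and sensitivity estimate follows. The contrast and count rate below are illustrative placeholders (the count rate is chosen so that the result reproduces the ∼1 × 10^-28 e·cm one-day sensitivity quoted in the abstract); τ = 1.1 ms and E_eff = 100 GV/cm are taken from the text.

```python
import numpy as np

hbar = 1.054571817e-34        # J s
e_charge = 1.602176634e-19    # C

def edm_phase_from_asymmetry(S_X, S_Y, sgn_phi_B=+1):
    """Linearized inversion of A = (S_X - S_Y)/(S_X + S_Y) ~ -2*sgn(phi_B)*phi_E."""
    A = (S_X - S_Y) / (S_X + S_Y)
    return -A / (2 * sgn_phi_B)

def shot_noise_dde(C=0.9, tau=1.1e-3, E_eff_Vcm=100e9, Ndot=1.3e4, T_days=1.0):
    """Shot-noise limit, delta d_e = hbar / (2 C tau E_eff sqrt(Ndot T)), in e cm."""
    E_eff = E_eff_Vcm * 100.0                 # V/cm -> V/m
    T = T_days * 86400.0                      # days -> s
    d_em = hbar / (2 * C * tau * e_charge * E_eff * np.sqrt(Ndot * T))  # e m
    return d_em * 100.0                       # e m -> e cm

print(f"phi_E = {edm_phase_from_asymmetry(5050.0, 4950.0):+.3e} rad")
print(f"delta d_e ~ {shot_noise_dde():.1e} e cm after one day")
```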
ThO buffer gas beam
ACME uses a cryogenic buffer gas beam source to achieve high single-quantum-state intensities of the chemically unstable molecular species ThO. The heart of the cold beam apparatus, the buffer gas cell (see Fig. 3), is similar to those described in earlier buffer-gas-cooled beam publications [40-43]. Our ACME beam was carefully characterized and described in [14]. The cell is a small copper chamber mounted in vacuum and held at a temperature of 16 K with a Cryomech PT415 pulse tube cooler. Cold neon buffer gas flows into the cell through a fill line at one end of the cylindrical volume, and at the other end of the cell an aperture 5 mm in diameter in a thin (0.5 mm) plate is open to the external vacuum, allowing the buffer gas to flow out as a beam. The cell is surrounded by two nested metal chambers that are also thermally anchored to the pulse tube cooler. The inner chamber is held at 4 K and acts as a high-speed, large-capacity cryopump for neon, maintaining a high vacuum of ∼3 µTorr in the system despite large buffer gas throughputs. The outer chamber is kept at 50 K and serves to shield the inner cryogenic regions from blackbody radiation emitted by the room-temperature vacuum chamber. Both the 4 K and the 50 K chambers have a window to admit the ablation laser and apertures to transmit and collimate the buffer gas beam. The source of ThO molecules is a ceramic target of thoria (ThO₂) made in-house using established techniques [44,10]. ThO molecules are introduced into the cell via laser ablation: a Litron Nano TRL 80-200 pulsed Nd:YAG laser is fired at the ThO₂ target, creating an initially hot plume of gas-phase ThO molecules. The ablation pulse energy is set to 75-100 mJ and the repetition rate to 50 Hz. On a time scale short compared to the emptying time of the cell into the beam region, the hot ThO molecules thermalize with the 16 K buffer gas in the cell. Continuous neon flow at ∼40 SCCM (standard cubic centimeters per minute) maintains a buffer gas density of n₀ ≈ 10¹⁵-10¹⁶ cm⁻³ (≈ 10⁻³-10⁻² Torr), where the subscript "0" indicates the steady-state value of the quantity in the cell. This is sufficient for rapid translational and rotational thermalization of the molecules and for producing hydrodynamic flow out of the cell aperture that entrains a significant fraction of the molecules before they can diffuse to the cell walls and stick. The result is a 1-3 ms long pulsed beam of cold ThO molecules embedded in a continuous flow of buffer gas.
Just outside the cell exit, the buffer gas density is still high enough for ThO-Ne collisions to play a significant role in the beam dynamics. The average thermal velocity of the buffer gas atoms is higher than that of the molecules by a factor of √(m_mol/m_b), where the subscripts "b" and "mol" indicate buffer gas and molecule quantities, respectively. Consequently, the ThO molecules (m_mol = 248 amu) experience collisions primarily from behind, with the fast neon atoms (m_b = 20 amu) pushing the slower ThO molecules ahead of them as they exit the cell. This accelerates the molecules to an average forward velocity v_f that is larger than the thermal velocity of ThO. As the buffer gas pressure in the cell is increased, v_f approaches v_{0,b}, the thermal velocity of the buffer gas. The angular distribution of a beam has a characteristic apex angle θ given by tan(θ/2) ≡ ∆v_⊥/(2v_f), where ∆v_⊥ is the transverse velocity spread of the beam. For the ACME beam, the apex angle is θ ≈ 30° and the characteristic solid angle is Ω ≈ 0.3 sr. The beam velocity is measured to be ∼180 m/s. As the gas cloud expands nearly isentropically out of the cell into the vacuum, it must also cool. The measured final longitudinal and rotational temperature of the beam is ∼4 K, yielding a forward velocity spread ∆v of ∼30 m/s FWHM (full width at half maximum) and efficiently populating low-lying rotational levels in the ground electronic state (e.g. ∼30% in J = 1). The total number of molecules per pulse in the few most populated quantum states is measured to be ∼10¹¹. This slow, cold, high-intensity molecular beam provides ACME with a long interaction time τ over a short distance, low phase decoherence due to the narrow velocity spread, and a high count rate Ṅ. Figure 4 shows some example data collected using the scheme described in Section 3. As derived in Section 3.1, this measurement scheme determines the accumulated phase due to the energy shift between the two M_J levels in either N state. This energy shift is given by [see Eq. (3)]

∆ε(N, E, B) = 2 g_{H,J=1} μ_B B B̂ + 2 N Ê d_e E_eff.
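As a quick consistency check on the beam numbers quoted above, the sketch below uses standard ideal-gas expressions (a textbook estimate, not a calculation from the paper) and reproduces the measured ~180 m/s forward velocity, the ~3.5 velocity ratio, and a solid angle of the right magnitude:

```python
import numpy as np

kB, amu = 1.380649e-23, 1.66053907e-27
T, m_ne, m_tho = 16.0, 20 * amu, 248 * amu

# Terminal forward velocity of a monatomic carrier gas expanding from
# temperature T: v_f ~ sqrt(5 kB T / m) in the full supersonic limit.
print(np.sqrt(5 * kB * T / m_ne))      # ~182 m/s, near the measured ~180 m/s

# Ratio of thermal speeds: Ne atoms overtake and push the heavier ThO.
print(np.sqrt(m_tho / m_ne))           # ~3.5

# Characteristic solid angle for an apex angle theta ~ 30 degrees.
theta = np.deg2rad(30)
print(2 * np.pi * (1 - np.cos(theta / 2)))   # ~0.2 sr, same order as ~0.3 sr
```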
Data analysis
If we wish to measure d_e in a way that is insensitive to noise or uncertainty in the external magnetic field B, we can repeat the measurement with both ±B and take the sum of the measurements, ∆ε(N, E, B) + ∆ε(N, E, −B) = 4 d_e E_eff N Ê. We can then take the difference of the measurements to isolate the magnetic field interaction, ∆ε(N, E, B) − ∆ε(N, E, −B) = 4 g_{H,J=1} μ_B B B̂. In other words, since the spin precession in the magnetic field is "B-odd" (it reverses when B is reversed) and the electron EDM precession is "B-even", we can distinguish them by taking repeated measurements with reversing magnetic fields and looking at sums or differences of those measurements. Notice that we can also separate the spin and EDM precession by reversing N or E, since the two terms also have opposite parity under reversal of those quantities.

Table 2. Shot-noise limited electron EDM uncertainty estimated from measured and calculated quantities. The measured uncertainty is about 1.4 times the shot-noise limit. Quantities in bold are ingredients in Eq. (13). All quantities other than the effective electric field E_eff are either experimental inputs or are derived from measurements taken in the ACME experiment's ordinary running configuration, as described in the text.
Quantity | Value | Formula
Effective electric field E_eff | 104 ± 26 GV/cm | (calculated)
(the remaining rows of Table 2 are not legible in the source)

In a real experiment a number of uncontrolled effects are present, including background fields, correlated fields (e.g. magnetic fields from leakage currents which reverse synchronously with E), motional fields, geometric phases, and many more [2]. Despite the best experimental efforts, these effects may cause energy shifts larger than the electron EDM; however, we can isolate the electron EDM from these effects using its unique "NEB = − − +" parity, i.e. odd parity under molecular dipole or electric field reversal and even parity under magnetic field reversal.
If we perform 8 repeated experiments, with each of the 2 3 = 8 combinations of ±N, ±E, ±B, we can take sums and differences to compute the 8 different possible parities under N, E, B reversals, as shown in Table 3. Apart from higher-order terms, such as cross-terms between background electric and magnetic fields, the electron EDM is the only term with NEB = − − + parity. This technique of isolation by parity is how EDM experiments can perform sensitive measurements of the electron EDM with achievable levels of control of experimental parameters. We also perform a number of auxiliary switches to check for other systematic dependences of the NEB = − − + signal, such as rotating the polarization angle of the pump and probe lasers and interchanging the positive and negative field plate voltage leads.
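A minimal sketch of this isolation-by-parity technique: given one phase measurement for each of the 2³ switch states, the component with a chosen (N, E, B) parity is an appropriately signed average over the eight measurements. The toy data below (values invented for illustration) contain a large B-odd Zeeman-like term and a small NE-odd EDM-like term; the NEB = − − + projection recovers only the latter:

```python
import numpy as np

signs = (+1, -1)

def parity_component(phi, parity):
    """Average phi[(N, E, B)] with weights that are odd in each switch for
    which parity holds -1, and even (+1) otherwise."""
    total = 0.0
    for n in signs:
        for e in signs:
            for b in signs:
                w = (n if parity[0] < 0 else 1) \
                    * (e if parity[1] < 0 else 1) \
                    * (b if parity[2] < 0 else 1)
                total += w * phi[(n, e, b)]
    return total / 8

# Toy phases: a B-odd precession term plus a tiny (N,E)-odd EDM-like term.
phi = {(n, e, b): 0.5 * b + 1e-4 * n * e
       for n in signs for e in signs for b in signs}
print(parity_component(phi, (-1, -1, +1)))   # 1e-4: only the EDM-like term
```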
Statistical sensitivity
The shot-noise limited sensitivity of the ACME experiment is given by Eq. (13). Other sources of technical noise may cause the achieved experimental sensitivity to be larger, but our measurements indicate that we are very near the shot-noise limit [45]. Table 2 derives ACME's expected shot-noise limited statistical EDM sensitivity from measured and calculated quantities. In this table, the interaction time τ is equal to the length of the interaction region L = 22 cm divided by the measured beam velocity v = 180 m/s [14]. The contrast C is determined by measuring the slope of the Ramsey fringe at |φ_B| = π/4. The count rate can be determined directly, by converting the PMT signal to a photon number, or indirectly, by starting with the measured molecule beam intensity and multiplying by the efficiency of each step in the measurement scheme. The molecule beam brightness in a single M_J sublevel of |X, J = 1⟩ was reported in [14]; multiplying this brightness by the solid angle of the molecular beam used in the measurement and by the efficiency factors below gives the expected count rate [Eq. (16)], where each value was measured separately. The fluorescence detection efficiency is the product of the measured geometric collection efficiency of the detection optics (∼14%) and the quantum efficiency of the PMTs (10%). The duty cycle is the fraction of the time during the run that data is being collected. ACME's duty cycle is presently around 50% because of the time required to switch various parameters (e.g. laser polarization angle), degauss the magnetic shields, optimize the ablation yield, and tune up the lasers during the run. Figure 5 shows a set of EDM data (with an unknown blind offset added during data processing) taken over a total of 14 hours on 2 different days. The 1-sigma statistical uncertainty in the EDM from this plot is 1.6 × 10⁻²⁸ e·cm in 14 hours. This corresponds to a 1-sigma statistical error bar of about 1 × 10⁻²⁸ e·cm in one day of averaging time, which is consistent within uncertainty with 1.4 times the shot-noise limit estimated in Table 2.
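Plugging representative numbers into Eq. (13) shows how the day-scale sensitivity arises. E_eff, L, and v are taken from the text; the contrast C and time-averaged count rate Ṅ below are assumed round numbers of plausible magnitude, not the entries of Table 2:

```python
import numpy as np

hbar, e = 1.054571817e-34, 1.602176634e-19

E_eff = 104e9 * 1e2    # 104 GV/cm expressed in V/m
tau = 0.22 / 180.0     # interaction time L/v, ~1.2 ms
C = 0.9                # fringe contrast (assumed)
Ndot = 1.5e4           # time-averaged count rate, 1/s (assumed)
T = 86400.0            # one day of running time

dd = hbar / (2 * C * tau * E_eff * np.sqrt(Ndot * T))   # Eq. (13), in C*m
print(dd / e * 100)    # ~8e-29 e*cm: the order of the quoted shot-noise limit
```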
Systematic checks
As discussed above, the particular behavior of the electron EDM under reversal of applied electric field, applied magnetic field, and molecule electric dipole orientation allows for powerful rejection of systematic effects. In order to test our ability to reject experimental imperfections, we can purposely amplify these imperfections and study their effect on our measured electron EDM. Say that some quantity X (for example, a non-reversing electric or magnetic field) mimics the electron EDM according to the relation d_e,false(X) = αX. If the quantity X can only be determined or controlled to the level X_control, then our measurement will have a systematic uncertainty due to imperfections in X of order δd_e,X ≈ |α X_control|. The quantity X_control can typically be determined with direct measurements (magnetometers to measure magnetic fields, spectroscopic techniques to measure electric fields, optical cavities to determine laser noise, etc.), but it remains to determine α. The general technique to determine α is simply to measure d_e with varying values of X and fit the functional form of d_e,false(X). At the time of this writing, no known systematic effects in the ThO experiment, including effects due to background fields, motional fields, and geometric phases, are expected to be larger than ∼10⁻³² e·cm, well below the statistical sensitivity of the experiment in reasonable averaging time [10]. Nevertheless, we are currently in the process of varying a large number of experimental parameters to look for unexpected systematic effects.

Table 3. Parity of energy shifts of selected effects in the ACME measurement. The difference between the g-factors of the two N states of H is ∆g [39], and the subscript nr denotes the non-reversing component of an applied field. Products of terms denote correlations between those terms. The terms with + − − parity are higher-order and negligibly small.

NEB parity | Effect
+ + + | Electron spin precession in background (non-reversing) magnetic field B_nr; pump/probe relative polarization offset
+ + − | Electron spin precession in applied magnetic field
+ − + | Leakage currents B_leak
− + + | ∆g B_nr, ∆g B_leak E_nr
+ − − | (higher-order, negligibly small)
− + − | Electric-field-dependent g-factors [39]
− − + | Electron EDM
− − − | ∆g E_nr
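The slope-fitting procedure for α is simple enough to sketch; the snippet below (toy numbers, invented coupling, not ACME data) measures a fake EDM channel at deliberately exaggerated values of an imperfection X, fits the linear coefficient, and propagates the control level:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.linspace(-5, 5, 11)          # exaggerated imperfection values (arb. units)
alpha_true = 2e-29                  # assumed linear coupling for this toy model
d_meas = alpha_true * X + rng.normal(0, 1e-29, X.size)   # noisy EDM channel

alpha, _ = np.polyfit(X, d_meas, 1)  # fitted slope of d_e,false(X) = alpha * X
X_control = 0.05                     # level to which X is nulled in normal running
print(abs(alpha * X_control))        # systematic uncertainty delta d_e,X
```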
Conclusion
The discovery of an electron EDM or an improvement on its upper limit by an order of magnitude or more would have a significant impact on our understanding of fundamental particle physics. We have described an ongoing experiment to search for the electron EDM using cold ThO molecules. This experiment has achieved a one-sigma statistical uncertainty of 1 × 10⁻²⁸ e·cm/√T, where T is the running time in days. This advance over previously published electron EDM experiments was made possible by the combination of a greatly increased molecular flux provided by our new cold molecular beam source and our choice of the ThO molecule, which is fully polarizable in small fields and has the highest effective electric field of any investigated species. We are now working to put limits on systematic errors that may be present in the experiment. ThO, due to its advantageous level structure, is particularly well suited to the suppression and rejection of systematic effects while searching for the electron EDM.

| 7,315 | 2013-07-05T00:00:00.000 | [ "Physics" ] |
Modal noise mitigation for high-precision spectroscopy using a photonic reformatter
Recently, we demonstrated how an astrophotonic light-reformatting device, based on a multicore fibre photonic lantern and a three-dimensional waveguide component, can be used to efficiently reformat the point spread function of a telescope to a diffraction-limited pseudo-slit [arXiv:1512.07309]. Here, we demonstrate how such a device can also efficiently mitigate modal noise, a potential source of instability in high-resolution multi-mode fibre-fed spectrographs. To investigate the modal noise performance of the photonic reformatter, we have used it to feed light into a bench-top near-infrared spectrograph (R ≈ 9,500, λ ≈ 1550 nm). One approach to quantifying the modal noise involved the use of broadband excitation light and a statistical analysis of how the overall measured spectrum was affected by variations in the input coupling conditions. This approach indicated that the photonic reformatter could reduce modal noise by a factor of six when compared to a multi-mode fibre with a similar number of guided modes. Another approach to quantifying the modal noise involved the use of multiple spectrally narrow lines, and an analysis of how the measured barycentres of these lines were affected by variations in the input coupling. Using this approach, the photonic reformatter was observed to suppress modal noise to the level necessary to obtain spectra with stability close to that observed when using a single-mode fibre feed. These results demonstrate the potential of using photonic reformatters to enable efficient multi-mode spectrographs that operate at the diffraction limit and are free of modal noise, with potential applications including radial velocity measurements of M-dwarfs.
INTRODUCTION
Since the first discovery of a planet orbiting a main-sequence star in 1995 [2], the search for exoplanets has been a major focus of modern astronomy. Current technology can only directly image very long-period planets using the very largest telescopes, and therefore detection techniques require us to look at the effect the planet has on its host star. The two most successful techniques are transit photometry [3], where the variation in the brightness of a star as the planet transits across it is measured, and the radial velocity (RV) method [4], which involves measuring the Doppler shift in the host star's spectrum due to the motion around the system barycentre. When these methods are used in combination, a great deal of information can be gathered about the exoplanet in question, including the density [5]. Typically, an exoplanet candidate is identified using a survey telescope, which relies on transit photometry. A famous example of such a telescope is the recently retired Kepler Space Telescope, which has allowed the identification of more than half of all confirmed exoplanets. Among others, the recently launched TESS (Transiting Exoplanet Survey Satellite) will observe the entire sky during its mission, and the PLATO mission (PLAnetary Transits and Oscillations of stars) is due for launch in 2026. They are expected to provide many new targets in the coming years. Kepler was designed specifically for the detection of Earth-Sun system analogues [6], and upon analysing the data, the prevalence of terrestrial planets detected was highly encouraging, although a poor detection rate of smaller-radius planets was identified [7]. Following this discovery, Dressing & Charbonneau extended the Kepler data to smaller planets around smaller stars, and estimated that M-dwarfs hosted habitable near-Earth-sized planets at a rate of 0.15^{+0.13}_{−0.06} per star [8].
M-dwarfs are small, cool stars (~2500 K) with peak blackbody emission in the near-infrared (NIR) range, and may have an anomalously high chance of hosting exoplanets. According to the study by Mulders, Pascucci & Apai [9], there is an inverse correlation between stellar temperature and planet occurrence rates: planets occur twice as frequently around M stars as around G stars (such as the Sun). Indeed, an M-dwarf hosts one of the largest solar systems discovered to date (apart from our own), TRAPPIST-1 [10]. The closest exoplanet to Earth, Proxima Centauri b, also orbits in the habitable zone of an M-dwarf [11]. A further advantage of performing RV measurements of these cooler, lower-mass stars is the increased perturbation from habitable-zone planets: the habitable zone lies closer in, and the smaller mass disparity makes small rocky planets that could host life much easier to detect [12].
To date, the most successful RV spectrograph is HARPS (High Accuracy Radial velocity Planet Searcher), which operates over visible wavelengths between 380 and 690 nm with a resolving power of 115,000, and uses a multi-mode (MM) fibre to feed light from the telescope focal plane to the instrument, which is placed in the observatory basement where environmental conditions are strictly controlled for maximum stability [13]. Such advantages of feeding spectrographs with optical fibres are profound and well known [14,15]. HARPS also uses a simultaneous ultra-stable ThAr reference spectrum fed through an adjacent fibre, allowing an RV precision down to 30 cm s⁻¹ [16]. Unfortunately, the use of silicon-based charge-coupled-device (CCD) arrays in HARPS restricts its observations to visible wavelengths, preventing efficient RV measurements of M-dwarf stars whose blackbody emission peaks in the NIR. There is therefore a strong pull to develop high-precision (~1 m s⁻¹) spectrographs for NIR RV measurements, but this capability still remains to be addressed. To highlight the difference in precision between wavelength regimes, CARMENES (Calar Alto high-Resolution search for M dwarfs with Exoearths with Near-infrared and optical Echelle Spectrographs), which saw first light in 2015, exhibits a precision of 1-2 m s⁻¹ in the visible and 5-10 m s⁻¹ in the NIR [17]. Roy et al. [18] also state this discrepancy in precision goals, these being ~10 cm s⁻¹ in the visible and <1 m s⁻¹ in the NIR. One example of a visible spectrograph with a precision goal of 10 cm s⁻¹ is ESPRESSO (Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations) [19], and an example of an NIR spectrograph with a precision goal of 1 m s⁻¹ is NIRPS (Near Infra-Red Planet Searcher) [20].
The use of fibres for transporting light from the telescope focal plane to the instrument is highly advantageous for environmental stability reasons, but they are not without drawbacks. For example, modal noise [21,22] is a phenomenon in which the pattern of light at the output of the fibre evolves with time, due to fluctuations in the distribution of optical energy or the relative phases of the guided modes. Modal noise will arise when the stellar image at the fibre input changes (either a change in its position as the telescope slews, or from changing atmospheric conditions), or if the fibre bends (due to the telescope slewing, thermal variations or air currents). This drastically reduces the signal-to-noise ratio [23] and severely limits the accuracy of the spectrograph. A further issue with modal noise is that it cannot be eliminated by calibration, since the injection of the calibration source will never exactly match that of the star, and the differing coupling thus exacerbates the modal noise.
Modal noise is obviously minimised by using a single-mode (SM) fibre to feed the spectrograph, but at the expense of telescope-fibre coupling efficiency, since atmospheric turbulence causes wavefront distortions and produces a stellar image that is not diffraction-limited. Extreme Adaptive Optics (AO) can be used to increase the coupling efficiency, and is planned for several future instruments [24]. However, this is expensive, and only possible where a suitably bright natural guide star is available, limiting the number of targets [25]. In any case, the point spread function (PSF) will still not be completely diffraction-limited [26]. Once in the MM regime, modal noise actually reduces as the number of spatial modes increases, due to statistical averaging. The number of modes supported by a step-index fibre, N_modes, is strongly dependent on the wavelength λ, as can be seen in equation (1):

N_modes ≈ V²/4, with V = 2πa·NA/λ,    (1)

where a and NA are the fibre core radius and numerical aperture respectively.
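A two-line check of equation (1), assuming the lantern's MM port has the same NA of 0.22 as the comparison fibre (an assumption; the text quotes only its 43 µm core size), reproduces the mode counts given later in Section 2:

```python
import numpy as np

def n_modes(core_diameter_um, na, wavelength_um):
    """Approximate step-index mode count, N ~ V^2 / 4 with V = pi*d*NA/lambda."""
    v = np.pi * core_diameter_um * na / wavelength_um
    return v ** 2 / 4

print(n_modes(50, 0.22, 1.55))   # ~124 modes: the MM comparison fibre
print(n_modes(43, 0.22, 1.55))   # ~92 modes: the photonic lantern port
```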
The simplest approach to mitigating modal noise in MM fibre-fed spectrographs is to agitate the fibre [23], which cross-couples the excited modes and averages the energy over all modes, including some that were not initially populated. This is most effectively done by hand due to the random nature of the hand movement [27], although an automated mechanical oscillator is more practical during observations. Other methods for mode-scrambling include using alternative fibre geometries, e.g. octagonal or rectangular core fibres [28]. Annealed fibre has also shown effective mode-scrambling [29], since scattering centres are produced in the fibre that distribute light uniformly both radially and azimuthally. Laser speckle reducers are also available commercially and have been used with some success by Mahadevan et al. [30]. In all of these cases, however, it is important to stress that efficient mode-scrambling becomes increasingly difficult to achieve as the number of modes decreases, making modal noise a particular challenge in precision MM fibre-fed NIR spectrographs. It is for this reason that modal noise is not a significant problem for spectrographs like HARPS that operate in the visible, but was a limiting factor in GIANO at the Telescopio Nazionale Galileo (TNG) [31], which suffered from such significant modal noise that the engineers modified the entrance slit to bypass the fibre altogether [32]. This free-space approach does not take advantage of the environmental and optomechanical stability advantages offered by an optical fibre feed.
A potential alternative to standard modal-noise mitigation techniques relies on the exploitation of photonic lanterns (PLs) [33,34], guided-wave transitions that efficiently couple light from a MM port to an array of SM waveguides. PLs can be created in a variety of ways. For example, a multicore fibre (MCF) with a two-dimensional (2D) array of SM cores can be heated and tapered to form a MM port when the taper is cleaved, with the taper forming a gradual transition between the MM port and the SM cores of the MCF. Such MCF-PLs are well suited to eliminating modal noise in radial velocity spectrographs [35,36]. A second approach involves the tapering of a bundle of SM optical fibres to form a MM port in a similar manner. Regardless of the specific fabrication approach used, a spectrograph fed with MM starlight in the form of multiple SMs would be free of modal noise. To exploit the full potential of this capability, however, it is essential to correctly arrange the SMs generated by the PL at the slit of the spectrograph, such that the individual spectra from each SM do not overlap on the detector.
One approach to achieve this in MCF-PL-fed spectrographs is the TIGER approach [37], where the MCF is rotated to the correct angle, although this technique is only applicable to systems operating in the few-mode regime. Another approach proposed by Bland-Hawthorn et al. [38], known as the PIMMS (Photonic Integrated Multi-Mode Spectrograph) concept, would make use of PLs fabricated from multiple SM fibres. In this case, the individual SMs at the PL output can be arranged at will along the slit of the spectrograph.
Both the PIMMS and TIGER approaches can, in principle, enable very high-resolution MM spectrographs that exploit an Echelle grating for dispersion [39,40]. However, for a real spectrograph system, we ideally wish to use a single MCF-PL for capturing the telescope PSF and transporting it to a spectrograph, whilst also combining this capability with the mode-reformatting flexibility offered by PLs fabricated from individual SM fibres. As we have demonstrated previously [1], one solution is the combination of an MCF-PL with a three-dimensional (3D) integrated optical waveguide mode-reformatting component fabricated using ultrafast laser inscription (ULI), an advanced laser manufacturing technique.
We have demonstrated that this combination of technologies can be seamlessly integrated to efficiently reformat the MM PSF of the CANARY AO system operating on the William Herschel Telescope to an SM pseudo-slit, with an on-sky throughput of 53 ± 4 per cent in the H-band. Although the on-sky throughput results were promising, no results relating to the modal noise performance of the device were reported.
To address this, we characterised the device using a single wavelength to simulate high resolution, both experimentally and theoretically [41]. Initial findings suggested modal noise, whilst largely suppressed, was still present in the output of the device. This was in agreement with other experiments [42] and showed that to properly estimate the modal noise contribution from a photonic reformatter, a full characterisation of the device was required.
In this paper, we use a simple NIR spectrograph to investigate in detail the modal noise performance of the photonic reformatter demonstrated on sky in MacLachlan et al. [1], and compare its performance to SM and MM fibres. We call this device the "hybrid" reformatter since it integrates an MCF-PL and a ULI fabricated mode reformatting component. In Section 2 we will outline the experimental design of the spectrograph and the experimental techniques used to investigate the modal noise performance. In Section 3 we describe the data processing methods and the results obtained, which demonstrate that near-SM performance can be obtained using the hybrid reformatter. In Section 4 we link the precision of our device to potential scientific applications.
CAPTURE METHODS
We have designed an inexpensive bench-top spectrograph constructed from catalogue components, which enables observation and quantification of modal noise using different optical fibre feeds. For our purposes, light was fed into the spectrograph using three devices: the hybrid reformatter (HR); an SM fibre patch cord (SMF28e, FC/PC connectors at both ends) with a mode field diameter (MFD) of 10.4 ± 0.5 µm at 1550 nm and mode NA of 0.14; and a step-index MM fibre with 50 µm core diameter and NA of 0.22. All devices had a length of approximately 2 m. A schematic of the spectrograph design, images of the end-facets of the three devices, and their respective typical output light patterns are presented in Fig. 1. A schematic of the complete HR is presented in Fig. 2. It is crucial to highlight that, to give the fairest comparison possible, the number of guided modes supported by this 50 µm MM fibre at 1550 nm is around 124, similar to the 92 modes of the lantern, which has a core size of 43 µm.
For all characterisation experiments, the light source used was an SM fibre-coupled broadband amplified spontaneous emission (ASE) source (Thorlabs FL70002-C4) centred on 1560 nm, close to the centre of an M-dwarf emission spectrum. A bandpass filter (BPF) (Thorlabs FB1550-12) with a full width at half maximum (FWHM) of 12 nm was also used to ensure that the spectrum was within one free spectral range of the Echelle Grating, eliminating the need for cross-dispersion. Neutral density (ND) filters were placed in the input beam path when necessary to avoid saturating the camera. Lens L1 is a 15.58 mm focal length fibre collimation package (NA 0.16) and lens L2 is a 10 mm focal length and 8 mm diameter achromatic doublet, producing an image of the fibre mode which has an MFD of 6.7 ± 0.5 µm. This image is initially aligned with the input to the device under test (DUT) by maximising the throughput. Lens L3 has 25.4 mm focal length, and lens L4 has 500 mm focal length for a magnification of ≈ 20. This magnification has been selected to allow the image of the pseudo-slit created by the HR to fill the majority of the detector height. The Echelle Grating (EG) was sourced from Thorlabs (GE2550-0363) with 63º blaze angle and 31.6 lines mm -1 ; this is used due to the high efficiency in higher orders, which greatly increases the dispersion and is essential to achieving a high resolution. M1 is a 2'' square silver mirror used to fold the beam path and overcome restrictions from bulky optic mounts. This allows the grating to be used close to its design angle (the Littrow configuration). Finally, the output raw images are recorded by a Hamamatsu C10633-23 InGaAs camera, based on a detector cell of 256×320 pixels with a pixel size of 30×30 µm. The resolving power R of the spectrograph is λ/Δλ, where Δλ is the FWHM of the line function. The resolving powers we calculate for each DUT are as follows: R ≈ 9,500 for SM fibre, R ≈ 3,500 for MM fibre, and R ≈ 7,000 for the HR.
To mimic the effect of the atmosphere and telescope slewing as stars are tracked across the sky, the coupling into the fibre was adjusted in a semi-random meander across the facet while maintaining a throughput greater than 80 per cent of the maximum at optimal coupling. At each fibre position, 250 frames were captured at 15 ms exposure time with the camera. These frames were then added together to produce a single detector image for that fibre position, simulating a longer and thus more relevant exposure time. The input coupling is modified successively to obtain 60 of these images, which were then processed according to Appendix A. In the following section we discuss how the data were analysed using two different experimental protocols to quantify the modal noise when characterising each DUT. We also investigate the effect of shaking the DUT on the modal noise performance. To do so, a mechanical scrambling system was constructed by fixing the centres of 2 independently controlled loudspeakers to loops of the DUT, which is represented by S in Fig. 1(a).
A. Characterising modal noise from a broadband measured spectrum
In our characterisation system, modal noise manifests itself through changes in the measured spectrum as the input coupling is varied. The strength of the modal noise can therefore be determined by quantifying how the spectrum varies across the full data set. To do so, we compare the spectrum obtained from each of the 60 images to the mean spectrum across all images.
A higher degree of modal noise will result in larger differences, which can then be quantified statistically. More specifically, 60 spectra were obtained by summing each processed image along the spatial axis (as described in Appendix A). Figs. 3(a-d) present example detector images for each device; since the difference between the images taken with and without shaking was low by eye, only the shaken (MMS) images are shown for the MM fibre. Figs. 3(e-h) present spectra obtained from the raw data sets after processing, with the black line representing the average spectrum obtained across the 60 different input coupling positions, and the red line representing the spectrum out of the 60 that was most different from the average. The peak of the spectrum appears at slightly different positions on the camera from one DUT to another due to small differences in the physical positioning of the DUT relative to the spectrograph. There is no reason to expect these differences will affect the analysis.
The difference between the data and spectra obtained using the SM fibre (Figs. 3(a&e)) and the MMS data (Figs. 3(b&f)) is immediately apparent. Here we see the characteristically smooth spectrum obtained using the SM fibre, and the highly variable spectrum obtained using the MM fibre, the shape of which will vary with input coupling. The difference between the spectra obtained using the HR and HRS devices are also observable, and we highlight the impact of the scrambling process in Figs. 3(c&d). The HRS image appears very similar to the SM image spectrally, but with a large difference in the height of the area filled on the detector due to the length of the pseudo-slit. Supplementary videos will be made available with the journal submission showing the evolution of the 60 images for each DUT.
The data processing steps followed to quantify the modal noise using the acquired data are depicted in Fig. 4 to aid the reader. First, by elementwise-dividing the average spectrum by the n-th (out of 60) measured spectrum (Fig. 4(a)), we obtain a 308-value vector that represents the deviations between the n-th spectrum and the average spectrum (Fig. 4(b)). This vector is then normalised such that its mean value is 1, which accounts for any variation in the absolute power of the n-th spectrum (Fig. 4(c)). The deviation from unity in this vector is related to the modal noise. Once this process is applied to the full data set, a histogram of the 18,480 values, sorted into 40 bins, can be plotted to represent the spectral differences across all 60 spectra due to modal noise. As seen in Fig. 5, the histograms (data points) generated by this data processing are well approximated by a Gaussian distribution, the best fit of which is also presented. We have normalised the histograms so that the Gaussian fit has a peak value of 1 to allow a straightforward comparison. The strength of the modal noise can be represented by the standard deviation of the fitted Gaussian, σ; these values can be found in Table 1, along with the goodness-of-fit.
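The statistics are compact enough to sketch in a few lines. The function below is a minimal re-implementation of the Fig. 4 pipeline (not the authors' code), applied here to synthetic stand-in spectra with 2 per cent multiplicative noise:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def modal_noise_sigma(spectra):
    """spectra: (60, 308) array of extracted spectra, one per input coupling."""
    mean_spec = spectra.mean(axis=0)
    ratios = mean_spec[None, :] / spectra            # average / n-th spectrum
    ratios /= ratios.mean(axis=1, keepdims=True)     # normalise each row to mean 1
    counts, edges = np.histogram(ratios.ravel(), bins=40)
    centres = 0.5 * (edges[:-1] + edges[1:])
    popt, _ = curve_fit(gauss, centres, counts,
                        p0=[counts.max(), 1.0, ratios.std()])
    return abs(popt[2])                              # sigma: the modal-noise metric

# Synthetic stand-in: flat spectra with 2 per cent multiplicative noise
demo = 1.0 + 0.02 * np.random.default_rng(1).normal(size=(60, 308))
print(modal_noise_sigma(demo))                       # ~0.02
```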
It is apparent that the modal noise present in the spectra measured using the HRS is greatly reduced compared to when using either the HR or MMS devices, and is approaching the performance of the SM fibre. It should be highlighted that the width of the histogram obtained using the SM fibre represents the experimental limits of our characterisation system. We see that the data obtained using the HR exhibit a factor of 2.6 reduction in modal noise compared to the data obtained with the MM device, and a factor of 2.0 reduction compared to the data obtained using the MMS device. Shaking the MCF of the HR reduces the modal noise by a further factor of 2.9. The data obtained using the MMS exhibit only a factor of 1.3 reduction in modal noise compared to the data obtained using the MM fibre. This brings the total modal noise mitigation to a factor of 5.7 between the HRS and the MMS, using the same scrambling system and a similar number of guided modes. These results clearly demonstrate the superior scrambling ability of the HR device in comparison to the MM fibre, and illustrate our first method for quantifying how a photonic reformatter such as the HR can efficiently mitigate modal noise.
B. Characterising modal noise from the barycentre precision of spectral peaks
The results outlined in Section 3.1 provide a straightforward route to quantify modal noise and modal noise mitigation, but do not provide an immediate quantification of how modal noise affects the precision of a spectrograph. To address this, we have investigated a second method to quantify modal noise. This method uses the same characterisation system shown in Fig. 1, but with a Fabry-Pérot etalon placed in the input beam path between L1 and L2, converting the smoothly varying broadband light source into a series of discrete spectral peaks spaced by a regular frequency interval -the etalon free spectral range. As outlined previously, modal noise generates variations in the acquired spectra as the input coupling is varied. Here, we use the measured spectral stability of the etalon peaks under different input coupling conditions as a proxy for the strength of modal noise using different DUTs. The etalon we chose was sourced from LightMachinery and was made from solid fused silica with a thickness of 0.821 mm and surface reflectivities of ~0.885 and ~0.873 respectively. These parameters produce an etalon with a finesse of 23, generating spectral peaks with a width of approximately 40 pm spaced by ≈ 1 nm at 1550 nm. This etalon was chosen since the spectral peaks are sufficiently spaced such that they can still be resolved by the spectrograph when fed using light via the MM fibre.
In Figs. 6(a-d) we present the raw images of data acquired using three DUTs (again with both unshaken and shaken conditions for the MCF of the PL). In Figs. 6(e-h) we present how the acquired spectrum varies with input coupling, again with the solid black line indicating the average spectrum for 60 measurements, and a sample of 5 of these spectra at different coupling positions represented by the coloured dashed lines. Again, since there is a lack of visual difference between the MM fibre with and without shaking, only the MMS measurements are presented in Fig. 6. We again used the process outlined in Appendix A to correct for deviations in the straightness of the pseudo-slit and the angle between the pseudo-slit and the pixel axes of the camera.
For each of the spectral peaks generated by the etalon, a Gaussian fit was made to determine the central wavelength (barycentre). The fit was typically performed over ~10 pixels, which gave high confidence in the fitted centre. The variation in the acquired spectra as the input coupling is varied is due to modal noise, which in turn results in variations in the measured barycentres of each peak. Thus, the standard deviation of the 60 measured barycentres for each peak is our second measure of modal noise. This data is plotted in Figs. 7(a-b), where the x-axis position of each data point represents the average measured barycentre for an etalon peak, and the y-axis position is the standard deviation of the 60 measured barycentres for that spectral peak.
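A minimal sketch of this barycentre analysis (a re-implementation on synthetic data, not the authors' pipeline; the peak width and injected jitter are assumed values of plausible magnitude):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma, c):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + c

def barycentres(spectra, wavelength, peak_guesses, half_window=5):
    """Fitted Gaussian centres for each etalon peak in each spectrum."""
    out = np.zeros((spectra.shape[0], len(peak_guesses)))
    for i, spec in enumerate(spectra):
        for j, pk in enumerate(peak_guesses):
            k = int(np.argmin(np.abs(wavelength - pk)))
            sl = slice(k - half_window, k + half_window + 1)
            p0 = [spec[sl].max(), wavelength[k], 0.07, 0.0]
            popt, _ = curve_fit(gauss, wavelength[sl], spec[sl], p0=p0)
            out[i, j] = popt[1]
    return out

# Synthetic demo: 15 peaks ~1 nm apart with 1e-3 nm barycentre jitter injected
rng = np.random.default_rng(2)
wl = np.linspace(1550.0, 1566.0, 308)
peaks = np.arange(1551.0, 1565.1, 1.0)
spectra = np.array([
    sum(gauss(wl, 1.0, p + rng.normal(0.0, 1e-3), 0.07, 0.0) for p in peaks)
    for _ in range(60)
])
print(barycentres(spectra, wl, peaks).std(axis=0))   # ~1e-3 nm per peak, as in Fig. 7
```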
It is also useful to plot these barycentre deviations as a fraction of the width of the peaks themselves, as this links the precision to the resolution; this is plotted in Fig. 7(c). We calculate the peak width from the FWHM of the Gaussian fit. The large uncertainty for the MM/MMS measurements is due to the non-Gaussian shape of the peaks providing an imprecise fit. In Table 2 we present the mean values (and associated uncertainty from the standard deviation of those 16 values) of the barycentre stability over each of the 16 peaks, calculated from the standard deviation (SD) of 60 barycentres as seen in Fig. 7. It is apparent that the photonic approach using the HRS offers a significant improvement over the MM fibre and a performance close to that measured using the SM fibre, and also that the mode-scrambling system is highly effective, reducing the modal noise by a factor of 5 compared to when using the HR without shaking. The variation of the barycentre across the pseudo-slit with the HRS is 0.56 per cent of the peak width. The effect of shaking the MM fibre is again observed to be minimal compared to the effect of shaking the HR. The mean SD as a percentage of the peak width for the SM fibre is equal to that of the HRS, showing their equivalent performance in this spectrograph.
C. Correcting for variations in the laboratory temperature
Our experiments were performed in a basic lab without the 0.01 K temperature control and vacuum chambers used in a state-of-the-art spectrograph. Temperature effects can therefore introduce instabilities in the spectrum that are not due to modal noise. For example, the true spectral positions of the etalon peaks may drift by 10 pm K -1 [43]. With the aim of accounting for the impact of laboratory thermal fluctuation on our barycentre method of quantifying modal noise we have also conducted the following additional analysis.
[Table 2. Average values of the barycentre precision presented in Fig. 7. We present the SD with a factor of 10⁻³ removed for ease of comparison; the individual table rows are not legible in the source.]

It is logical to assume that laboratory temperature drifts will have a very similar effect on all the spectral peaks across the measurement, since the wavelength span of the measurement is very small. Therefore, by examining the manner in which the measured spectrum shifts for each of the 60 measurements compared to the mean spectrum measured across all 60, it is possible to estimate and remove this common drift. For a given spectrum, we use the average deviation of 15 etalon peaks from their mean positions (across the 60 measurements) as a proxy to represent how much the spectrograph has drifted from its mean position due to thermal effects; we call this shift the "temperature proxy". The temperature proxy was observed to vary gradually over the 60 measurements with a full range of ~8 pm. This would, for example, correspond to an etalon temperature range of slightly less than one degree. We then spectrally shift each of the 60 spectra by the temperature proxy, such that the mean position of the 15 peaks is the same across all 60 measurements. This primarily compensates for variations in the laboratory temperature, leaving mainly the instability in the peak positions due to modal noise. A Fourier transform of the difference spectrum between the mean spectrum in Fig. 3(h) and any one of the 60 contributing spectra indicates that the modal noise in the HRS occurs with a period that is ~25 per cent shorter than the period of the etalon peaks. This means that every third etalon peak samples the modal noise with the same phase, so the effect of the modal noise on the mean barycentre shift is negligible when considering every group of three adjacent etalon peaks. We therefore calculate the temperature proxy using the 15 etalon peaks which lie closest to the centre of the wavelength span (a multiple of 3), rather than the 16 available, so that we do not subtract contributions to the mean barycentre shift that are due to modal noise.
The standard deviation of the position of each of the etalon peaks relative to their respective mean positions can then be recalculated, and plotted to generate "laboratory temperature corrected" versions of Figs. 7(b&c), presented as Figs. 8(a&b).
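A sketch of the temperature-proxy correction, under the assumptions stated above; for brevity the proxy here is built from the first 15 of the 16 peaks rather than the 15 most central ones, and the ~8 pm drift and 0.4 pm jitter are invented for the demo:

```python
import numpy as np

def temperature_correct(bary):
    """bary: (60, 16) matrix of measured barycentres in nm.  Build the
    per-spectrum temperature proxy from 15 peaks (a multiple of 3) and
    subtract it, returning the residual per-peak scatter."""
    central = bary[:, :15]
    proxy = (central - central.mean(axis=0)).mean(axis=1)   # drift per spectrum
    corrected = bary - proxy[:, None]
    return corrected.std(axis=0)

# Demo: ~8 pm common-mode drift over the 60 exposures plus 0.4 pm jitter
rng = np.random.default_rng(3)
drift = np.linspace(0.0, 8e-3, 60)
bary = (1551.0 + np.arange(16.0))[None, :] + drift[:, None] \
       + rng.normal(0.0, 4e-4, (60, 16))
print(temperature_correct(bary).mean())   # ~4e-4 nm: the drift is removed
```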
In Table 3 we present the mean values of the data shown in Fig. 8. When compared to the data presented in Table 2, it is clear that accounting for the effect of laboratory temperature increases the stability of the etalon spectral peaks by a factor of ~6 for both the SM and HRS DUTs. It is also interesting to note the shape of the curves shown in Fig. 8, where the etalon peaks are observed to be most stable in the middle of the spectral range. We believe this is due to the spectral peaks being physically wider on the detector array in the centre of the spectral range, becoming progressively narrower to either side (the differences between the widest and narrowest peaks are factors of 1.5 and 1.9 for SM and HRS respectively). This may be due to field curvature resulting from lens L4, which prevents all wavelengths from being simultaneously in focus on the flat detector; Zemax simulations support this belief. There is no reason to suggest that it will not be possible, with a more carefully engineered spectrograph, to achieve the stability observed with the etalon peaks in the centre of the spectral range.

[Fig. 8. (a) Standard deviation of 60 calculated barycentres when laboratory temperature correction is applied to the data, for each spectral peak at the given average wavelength, where green squares correspond to SM and light blue triangles to HRS; (b) the same standard deviations plotted as a percentage of the respective peak's width. The spectra of each DUT were sampled at different positions, which causes the slight offset on the graph.]

Table 3. Average values of the barycentre precision presented in Fig. 8. We present the mean SD with a factor of 10⁻³ removed for ease of comparison.

Device | Mean SD (×10⁻³ nm) | Mean SD as % of peak width
SM | 0.28 ± 0.10 | 0.17 ± 0.06
HRS | 0.41 ± 0.07 | 0.19 ± 0.07

Fig. 8(b) indicates that the barycentres of the etalon peaks at around 1559 nm are stable to a thousandth of the width of the peak for both the SM and HRS DUTs. Based on a barycentre precision of 8 ± 2 per cent of the peak width for the MM fibre DUT, the HRS was observed to result in a factor of ~100 improvement in the barycentre stability. Anagnos et al. [44] have used beam propagation simulations to model the propagation of light through a photonic reformatting component similar to the HR device investigated here, and concluded that it should increase the barycentre precision by a factor of 1000 compared to a 50 µm core MM fibre. It is important to note, however, that the data we have obtained using the SM DUT represents the modal noise measurement limit of our characterisation system and methods. Fig. 8(b) therefore merely demonstrates that the HRS exhibits a level of modal noise that is not detectable using our experimental system and the methods we have described. With an improved experimental system and optimised experimental protocol, it is logical to expect that the graphs presented in Fig. 8 will both reduce in magnitude further and eventually separate, with the data for the SM DUT dropping further.
PRECISION
When using the HRS and our benchtop spectrograph, we are able to achieve a barycentre stability of 0.41 × 10⁻³ nm after accounting for the effect of laboratory temperature variations. This implies that a single spectroscopic line could be measured to an accuracy of ~80 m s⁻¹, assuming all other sources of noise are negligible. We therefore conclude that if our spectrograph (or similar) were placed in an environmentally controlled container, it would already operate with a precision close to that required for scientific applications, e.g. for detecting hot Jupiters such as WASP-19b [45], which orbits a star of 12th magnitude and requires a high-throughput spectrograph such as one enabled by this device. If the HRS were used to feed light into a higher-resolution spectrograph (such as the ≈120,000 offered by the current state of the art) using a camera with smaller pixel size (15×15 µm is the current state of the art), we might expect a 0.1 per cent barycentre stability relative to the physical peak width to result in a single-line radial velocity precision of around 1 m s⁻¹, again assuming all other sources of noise are negligible. This is easily low enough to detect a terrestrial exoplanet in the habitable zone around an M-dwarf. We also note that real RV measurements are almost always made by cross-correlating full spectra consisting of many spectral lines, and so it is reasonable to conclude that the achievable precision when limited by modal noise could be significantly better.
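Both conversions in this section follow from the Doppler relation δv = c·δλ/λ; a short check (values from the text; the R = 120,000 scaling is an order-of-magnitude estimate only):

```python
c = 2.998e8                      # speed of light, m/s
dlam, lam = 0.41e-3, 1550.0      # barycentre stability and wavelength, nm
print(c * dlam / lam)            # ~79 m/s: the ~80 m/s single-line figure

peak = lam / 120_000             # peak width at R = 120,000, ~0.013 nm
print(c * (1e-3 * peak) / lam)   # a few m/s: the order of the ~1 m/s goal,
                                 # before cross-correlation gains over many lines
```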
CONCLUSIONS
We have developed a bench top near infrared spectrograph to characterise the modal noise performance of a photonic reformatter called the hybrid reformatter (HR) which reformats a telescope point spread function to a diffraction-limited pseudo-slit. We used the spectrograph to compare the modal noise performance of the HR to that exhibited by two reference devices: a single mode fibre and a multi-mode fibre which supported a similar number of guided modes. We also investigated the effect of mechanical shaking on the modal noise.
We used two methods to quantify the strength of the modal noise. In the first we used a spectrally smooth broadband source and a statistical analysis to quantify how the entire acquired spectrum changed as a result of different input coupling conditions to simulate the effect of telescope slewing and tracking during an exposure. Using this method, we observed that the modal noise performance of the HR when shaken was a factor of ≈ 6 better than that observed using the MM fibre when shaken, but a factor of ≈ 2 worse than when using the SM fibre.
In the second, we used a broadband source consisting of multiple spectrally narrow peaks to quantify how the barycentres of the peaks shift as a result of different input coupling conditions. In this case, we observed that the modal noise performance of the HR when shaken was identical to that of the SM fibre, but we again highlight that this merely indicates that the HR when shaken exhibits a level of modal noise that is not detectable using our experimental system and the barycentre method we have described.
Finally, looking forward to science applications, we have considered the relevance of our modal noise characterisation tests in the context of NIR radial velocity measurements, concluding that HR devices could offer a powerful route to combining the high throughput efficiencies enabled by multi-mode operation with high-precision spectroscopy through strong modal noise mitigation.

The data processing procedure referred to throughout as Appendix A consists of the following steps. Step 1 is to select a raw image of 250 rows.
Step 2 is to generate the total spectrum and a partial spectrum from a 10-row block.
Step 3 is to perform a cross-correlation between these spectra. The lag corresponding to the maximum value is the row shift for that block and particular image.
Step 4 is to repeat this for 25 blocks and 60 images to generate a matrix. The average row shift for each block is calculated and rounded to the nearest integer.
Step 5 is to apply the row shifts for each block consistently for all images.
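A minimal Python sketch of Steps 1-5 (an illustrative re-implementation, not the authors' code); the demo image contains one deliberately misaligned 10-row block, which the cross-correlation identifies:

```python
import numpy as np

def row_shifts(image, block_rows=10):
    """Steps 1-3: cross-correlate each 10-row block spectrum against the
    total spectrum; the lag of the maximum is that block's row shift."""
    total = image.sum(axis=0)
    total = total - total.mean()
    shifts = []
    for start in range(0, image.shape[0], block_rows):
        block = image[start:start + block_rows].sum(axis=0)
        block = block - block.mean()
        xc = np.correlate(block, total, mode="full")
        shifts.append(int(xc.argmax()) - (total.size - 1))
    return np.array(shifts)

# Demo: a 250-row image whose block 10 is misaligned by 3 columns
img = np.tile(np.exp(-np.linspace(-4, 4, 320) ** 2), (250, 1))
img[100:110] = np.roll(img[100:110], 3, axis=1)
print(row_shifts(img))   # zeros except a shift of 3 for block 10

# Steps 4-5: average these shifts over the 60 images, round to the nearest
# integer, and roll each block consistently to straighten the pseudo-slit.
```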
Applying the row shifts caused some errors at the edges of the images, so the outer 5 columns on each side were truncated to ensure that no unnecessary errors in the spectra were introduced by the data processing. For each DUT, to determine the relationship between pixel number and wavelength, a tuneable laser (Anritsu MG9638A) was scanned across the detector in 1 nm increments. A straight-line fit then gave the conversion factor (since the small wavelength range does not introduce nonlinear dispersion).

| 8,661.8 | 2020-01-24T00:00:00.000 | [ "Physics", "Engineering" ] |
Optical multistability and Fano line-shape control via mode coupling in whispering-gallery-mode microresonator optomechanics
We study a three-mode (i.e., a clockwise mode, a counterclockwise mode, and a mechanical mode) coherent coupling regime of the optical whispering-gallery-mode (WGM) microresonator optomechanical system by considering a pair of counterpropagating modes in a general case. The WGM microresonator is coherently driven by a strong control laser field and a relatively weak probe laser field via a tapered fiber. The system parameters utilized to explore this process correspond to experimentally demonstrated values in WGM microresonator optomechanical systems. By properly adjusting the coupling rate of the two counterpropagating modes in the WGM microresonator, the steady-state displacement behavior of the mechanical oscillation and the normalized power transmission and reflection spectra of the output fields are analyzed in detail. It is found that the mode coupling plays a crucial role in producing rich line-shape structures. Some interesting phenomena of the system, including optical multistability and sharp asymmetric Fano-shape optomechanically induced transparency (OMIT), can be generated with a large degree of control and tunability. The results obtained in this study can be used for designing efficient all-optical switches and high-sensitivity sensors.
[Fig. 1. Schematic diagram of the WGM microresonator optomechanical system consisting of a tapered fiber and a WGM ring microresonator which contains a mechanical breathing mode with resonance frequency Ω_m. Two degenerate counterpropagating modes are respectively labeled as â (CCW) and b̂ (CW), with the same frequency ω. Because of internal defect centers or surface roughness, these two modes are coupled to each other at a rate J, which is known as the so-called mode coupling. The intrinsic loss of the cavity fields is denoted by κ_i and the waveguide-cavity coupling strength is κ_ex. The CCW mode is driven by an external input field a_in comprising a strong control field and a weak probe field with field strengths ε_c and ε_p as well as carrier frequencies ω_c and ω_p. The output fields are described by a_out and b_out, respectively. See text for details.]
Results
Theoretical model. As schematically shown in Fig. 1, we consider a microresonator optomechanical system, which consists of a WGM microresonator containing a mechanical breathing mode and a tapered fiber. As shown in ref. 27, the WGM microresonator can support counterclockwise (CCW) and clockwise (CW) propagating modes, which are described in terms of the annihilation (creation) operators â (â†) and b̂ (b̂†), with a common frequency ω.
Because of residual scattering of light at the surface or in the bulk glass, the CCW and CW propagating modes are coupled to each other at a rate J. At the same time, these two modes interact with the mechanical radial breathing mode through the radiation pressure, where the optomechanical coupling strength between the optical modes and the mechanical mode is characterized by G. The two CCW and CW modes are side-coupled to a tapered fiber by the evanescent field, which is determined by the propagating direction of the light in the coupling region. We assume that the CCW mode in the WGM microresonator (see Fig. 1) is coherently driven by an external input laser field a_in consisting of a strong control field and a relatively weak probe field, with field strengths (carrier frequencies) ε_c and ε_p (ω_c and ω_p). The field strengths ε_c and ε_p are normalized to a photon flux at the input of the microresonator and are defined as ε_c = √(P_c/ħω_c) and ε_p = √(P_p/ħω_p), where P_c and P_p are the powers of the control field and the probe field, respectively. Without loss of generality, we assume that ε_p and ε_c are real. Experimentally, ε_p is usually chosen to be much smaller than ε_c. More information on the device and experimental details can be found in ref. 27 and the supporting online material accompanying ref. 27. The Hamiltonian of the whole system is given by

H = p²/(2m) + (1/2)mΩ_m²x² + ħω(â†â + b̂†b̂) + ħJ(â†b̂ + b̂†â) − ħGx(â†â + b̂†b̂) + iħ√κ_ex ε_c(â†e^{−iω_c t} − âe^{iω_c t}) + iħ√κ_ex ε_p(â†e^{−iω_p t} − âe^{iω_p t}),    (1)

where x and p are the position and momentum operators of the mechanical oscillator with the effective mass m and resonance frequency Ω_m, satisfying the commutation relation [x, p] = iħ. The optomechanical coupling constant G between the mechanical and cavity modes can be defined as G = −∂ω/∂x, which is determined by the shift of the cavity resonance frequency per unit displacement of the mechanical resonator (ref. 2). The total decay rate of the WGM microresonator mode (the microresonator linewidth) is denoted by κ = κ_i + κ_ex, where κ_i is the intrinsic decay rate, related to the intrinsic quality factor Q_i by κ_i = ω/Q_i, and κ_ex is the external decay rate (the outgoing coupling coefficient) from the optical resonator into the tapered fiber, related to the coupling quality factor Q_e by κ_ex = ω/Q_e. The total decay rate κ is related to the total quality factor Q by κ = ω/Q. Obviously, 1/Q = 1/Q_i + 1/Q_e. Various techniques have been reported for changing Q dynamically [65-67]. The outgoing coupling coefficient η_c = κ_ex/κ can be used to measure the cavity loading degree: (i) if η_c < 0.5, the WGM microresonator is in the under-coupling regime; (ii) if η_c = 0.5, the WGM microresonator is in the critical-coupling regime; (iii) if η_c > 0.5, the WGM microresonator is in the over-coupling regime. The outgoing coupling coefficient η_c can be continuously tuned by changing the air gap between the WGM microresonator and the tapered fiber 27. Finally, it should be pointed out that the coupling between the CW and CCW modes is usually caused by residual scattering of light at the surface or in the bulk glass, as well as by interruptions such as nanoparticles. Thus surface roughness or internal defect centers in the WGM microresonator are the critical ingredients producing the coupling between the CCW and CW modes, and these factors may be used to control and tune the coupling rate J. Note that both J and κ can be controlled independently in actual systems.
In the above Hamiltonian (1), the first and second terms represent the energies of the mechanical oscillator. The third term is the energy of the WGM microresonator. The fourth term describes the coherent coupling of the CCW mode â with the CW mode b , i.e., the so-called mode coupling term. The fifth term presents the optomechanical coupling due to the radiation pressure with the coupling strength G. The last two terms in Eq. (1) describe the interactions between the cavity field and the two input fields, respectively.
Controlled optical bistability and multistability in WGM microresonator optomechanical system.
In this section, as a first insight, we show how optical bistability and multistability in the displacement of the mechanical resonator can be modified and controlled by the mode coupling rate J under the action of the strong control field in our proposed scheme. When the coupled system is strongly driven, it can be characterized by semiclassical steady-state solutions with large amplitudes for both the optical and mechanical modes. In view of this, by solving Eqs (8-10) of the Methods under steady-state conditions, we obtain the result for x 11,[39][40][41]. Here both the CW and CCW modes are simultaneously coupled to the mechanical oscillator. We plot the stationary value of the displacement x of the mechanical resonator as a function of the control-field power P_c for six different values of the coupling rate J between the two CW and CCW modes, as shown in Fig. 2(a-f). First of all, from Fig. 2(a), corresponding to the case J = 0, it is easy to see that an S-shaped dependence of the displacement of the mechanical resonator forms efficiently; that is, the coupled system exhibits bistable behavior in which the largest and smallest roots of x are stable and the middle one is unstable. Such optical bistability has been investigated in previous optomechanical systems 11,[39][40][41]. In this situation, the system has only a single bistable window, because only the CCW mode of the WGM microresonator is coupled to the mechanical resonator, while the coupling of the other (CW) mode to the mechanical resonator is usually neglected 27. For J = 0.05 Ω_m in Fig. 2(b), the optical bistable behavior is almost unchanged, because the increase in the mode coupling rate J is still small. Second, as we continue to increase the coupling rate J between the two CW and CCW modes, e.g., J = 0.1 Ω_m and 0.2 Ω_m, optical multistable behavior begins to appear, as can be seen in Fig. 2(c) and (d). For the case considered in Fig. 2(e), the coupled system clearly displays optical multistable behavior. Lastly, with a further increase of J (for example, J = 0.45 Ω_m), the system exhibits multistability consisting of two separated bistable windows, as depicted in Fig. 2(f). Figure 2(a-f) shows that the steady-state response of the mechanical resonator may be bistable or tristable, depending strongly on the value of the mode coupling rate J.
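Since Eqs (8-10) are not reproduced in this excerpt, the sketch below illustrates the same type of steady-state analysis with a generic classical model of two degenerate modes coupled at rate J and both coupled to the displacement x; all parameter values are illustrative assumptions. Treating x as the independent variable and inverting for the sustaining control power P_c avoids polynomial root finding: two turning points of P_c(x) signal bistability, four signal tristability.

```python
import numpy as np

# Illustrative dimensionless parameters (rates in units of Omega_m)
Omega_m, kappa, kappa_ex = 1.0, 0.2, 0.1
Delta0 = 1.0    # bare control-field detuning
g = 0.3         # optomechanical frequency pull per unit displacement
J = 0.2         # CCW-CW mode coupling rate

def Pc_of_x(x):
    """Control power (in units of |eps_c|^2) that sustains displacement x.

    Classical steady state of the two coupled modes:
        0 = -(kappa/2 + i*Delta(x)) a - i*J*b + sqrt(kappa_ex)*eps_c
        0 = -(kappa/2 + i*Delta(x)) b - i*J*a
        Omega_m^2 * x = g * (|a|^2 + |b|^2),   Delta(x) = Delta0 - g*x
    """
    chi = 1.0 / (kappa / 2 + 1j * (Delta0 - g * x))
    a1 = np.sqrt(kappa_ex) * chi / (1.0 + (J * chi) ** 2)  # a per unit eps_c
    n1 = np.abs(a1) ** 2 * (1.0 + (J * np.abs(chi)) ** 2)  # (|a|^2+|b|^2)/eps_c^2
    return Omega_m ** 2 * x / (g * n1)

xs = np.linspace(1e-6, 8.0, 4000)
Pc = Pc_of_x(xs)
# 0 turning points -> monostable, 2 -> bistable, 4 -> tristable
n_turn = np.flatnonzero(np.diff(np.sign(np.diff(Pc)))).size
print(f"turning points of Pc(x): {n_turn}")
```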
Here it is worth pointing out that, in the region with three solutions, two of them are stable according to a standard linear stability analysis 68, while in the region with five solutions, three of them are stable. As a consequence, these results for x represent bistable [see Fig. 2(a,b)] and tristable [see Fig. 2(d-f)] regimes, respectively. Finally, the threshold values of J separating the bistable and tristable regimes are too cumbersome to be given here.
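The cited linear stability analysis can be reproduced numerically by linearizing the classical equations of motion about each fixed point and checking that every Jacobian eigenvalue has a negative real part. A sketch under the same illustrative model and parameters as above (a mechanical damping Γ_m is added, since stability depends on it):

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative dimensionless parameters (rates in units of Omega_m)
kappa, kappa_ex, Delta0, g, J = 0.2, 0.1, 1.0, 0.3, 0.2
Omega_m, Gamma_m, eps_c = 1.0, 0.01, 0.5

def flow(y):
    """Real-valued flow for the state y = (Re a, Im a, Re b, Im b, x, p)."""
    a, b = y[0] + 1j * y[1], y[2] + 1j * y[3]
    x, p = y[4], y[5]
    Delta = Delta0 - g * x
    da = -(kappa / 2 + 1j * Delta) * a - 1j * J * b + np.sqrt(kappa_ex) * eps_c
    db = -(kappa / 2 + 1j * Delta) * b - 1j * J * a
    dp = -Omega_m ** 2 * x - Gamma_m * p + g * (abs(a) ** 2 + abs(b) ** 2)
    return np.array([da.real, da.imag, db.real, db.imag, p, dp])

def is_stable(y0, h=1e-6):
    """Finite-difference Jacobian at y0; stable iff all eigenvalues have Re < 0."""
    Jac = np.empty((6, 6))
    for k in range(6):
        e = np.zeros(6); e[k] = h
        Jac[:, k] = (flow(y0 + e) - flow(y0 - e)) / (2 * h)
    return bool(np.all(np.linalg.eigvals(Jac).real < 0))

y_star = fsolve(flow, np.zeros(6))   # one fixed point, found from a cold start
print(is_stable(y_star))
```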
According to the discussion above, we conclude that the bistability and tristability generated in Fig. 2 are closely related to the mode coupling rate J. The WGM microresonator optomechanical system considered here enables additional control over the bistable and tristable behaviors of the displacement of the mechanical resonator through appropriate adjustment of the coupling rate J between the two modes of the WGM microresonator. From an experimental point of view, controlled triple-state switching is practically possible by adding a pulse sequence to the input field 69. Such optical tristability can be used for building all-optical switches, logic-gate devices, and memory devices for optical computing and quantum information processing.
Controlled sharp asymmetric Fano resonance OMIT line-shapes in WGM microresonator optomechanical system. The Fano resonance, which has a pronounced sharp asymmetric line-shape profile, is remarkably different from the symmetric OMIT spectral profile mentioned above 11,27. Because of its sharp asymmetric line-shape, any small change in the considered Fano system can cause a large change in both the amplitude and the phase. Consequently, the possibility of controlling and tuning the Fano resonance is a functionality of key relevance. In this section, we examine the effect of various system parameters on the asymmetric Fano resonance line-shapes of the normalized power forward transmission T_F and backward reflection T_B at the output of the device. The detailed results are given in Figs 3, 4, 5 and 6.
First of all, we start by exploring how the line-shapes of the Fano resonance can be modified by varying the coupling rate J between the two CW and CCW counterpropagating modes. Figure 3 shows the normalized power forward transmission T_F and backward reflection T_B as functions of the detuning Ω (Ω ≡ ω_p − ω_c, in units of Ω_m) for four different values of the mode coupling rate J, based on the analytical expressions (27) and (28) of the Methods. We use the parameter values ε_p/ε_c = 0.05, P_c = 10 mW, and η_c = 0.5; the other system parameters are exactly the same as those in Fig. 2. Figure 3(a) shows T_F and T_B versus the detuning Ω for the case J = 0, in which the two counterpropagating modes are not coupled to each other. It can be seen from Fig. 3(a) that the normalized power forward transmission T_F (blue line) has a single transparency peak at the center Ω = Ω_m and two dips on either side, exhibiting a symmetric dip-peak-dip spectral structure in the forward transmission. This is the well-known OMIT effect, which has been intensively studied in previous works [27][28][29][30][31][32][33][34][35]. In the meantime, the normalized power backward reflection T_B (red dotted line) is zero, as expected, because the CW mode b̂ (see Fig. 1) is not excited at all when J = 0. This also corresponds to the situation in Fig. 2(a), where the optomechanical system possesses only a single bistability. Figure 3(b-d) displays T_F and T_B as functions of the detuning Ω when the mode coupling rate J is nonzero. Specifically, in Fig. 3(b) the mode coupling rate J = 0.2 Ω_m is not yet large enough, so the Fano resonance has not emerged in this case. When J = 0.4 Ω_m in Fig. 3(c) and J = 0.6 Ω_m in Fig. 3(d), however, the system displays rich line-shape structures: typical asymmetric Fano line-shapes are generated efficiently near Ω = Ω_m. Physically, the underlying mechanism for generating such Fano resonances is the destructive interference between the forward and backward reflections of the optical field along different pathways, owing to the fact that the WGM microresonator introduces backward-propagating light via the mode coupling term ħJ(â†b̂ + âb̂†) in Eq. (1). Comparing Fig. 3(c) and (d), where the mode coupling rates J are 0.4 Ω_m and 0.6 Ω_m, respectively, we find that the Fano line-shapes change distinctly as the mode coupling rate J increases. In particular, when we increase the mode coupling rate J to the large value of 0.6 Ω_m, the dip of the Fano resonance in the normalized power forward transmission T_F decreases considerably. We observe an enhanced Fano line-shape with a resonance maximum (peak) at Ω = Ω_m and a resonance minimum (dip) at Ω = 1.013 Ω_m; the spectral width between the peak and the dip of the Fano resonance is ΔΩ = 0.013 Ω_m. In our Fano optomechanical system, the forward transmission contrast of the Fano response is as high as approximately 54% [see Fig. 3(d)], which is sufficient for any telecom system 70. Correspondingly, the peak of the Fano resonance in the normalized power backward reflection T_B increases, as required by energy conservation. At the same time, the spectra of T_F and T_B broaden outwards from the resonance peak.
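Because the closed-form expressions (27) and (28) appear only in the Methods, a numerical sketch of the same linear response is given below. It solves the linearized fluctuation equations at the probe detuning with the Stokes (2ω_c − ω_p) sideband dropped for brevity; the steady-state amplitudes a_s, b_s, the input-output convention, and all parameter values are illustrative assumptions, so the sketch reproduces the qualitative OMIT/Fano structure rather than the exact curves of Fig. 3:

```python
import numpy as np

# Illustrative dimensionless parameters (rates in units of Omega_m); the
# steady-state amplitudes a_s, b_s are assumed, not computed self-consistently.
Omega_m, kappa, kappa_ex = 1.0, 0.2, 0.1
Gamma_m, Delta, J, g = 0.005, 1.0, 0.4, 0.3
a_s, b_s = 2.0, -0.5j

def spectra(Om):
    """Probe transmission/reflection at detuning Om = omega_p - omega_c.

    Linearized fluctuation equations at e^{-i*Om*t}, Stokes sideband dropped:
        (kappa/2 + i(Delta - Om)) A + iJ B + i g a_s X = sqrt(kappa_ex) eps_p
        (kappa/2 + i(Delta - Om)) B + iJ A + i g b_s X = 0
        (Omega_m^2 - Om^2 - i Gamma_m Om) X = g (conj(a_s) A + conj(b_s) B)
    """
    eps_p = 1.0
    d = kappa / 2 + 1j * (Delta - Om)
    chi_m = 1.0 / (Omega_m**2 - Om**2 - 1j * Gamma_m * Om)
    # Eliminate X and solve the remaining 2x2 system for A (CCW) and B (CW).
    c11 = d + 1j * g**2 * chi_m * a_s * np.conj(a_s)
    c12 = 1j * J + 1j * g**2 * chi_m * a_s * np.conj(b_s)
    c21 = 1j * J + 1j * g**2 * chi_m * b_s * np.conj(a_s)
    c22 = d + 1j * g**2 * chi_m * b_s * np.conj(b_s)
    A, B = np.linalg.solve(np.array([[c11, c12], [c21, c22]]),
                           np.array([np.sqrt(kappa_ex) * eps_p, 0.0]))
    T_F = abs(1.0 - np.sqrt(kappa_ex) * A / eps_p)**2  # a_out = a_in - sqrt(kex)*a
    T_B = abs(np.sqrt(kappa_ex) * B / eps_p)**2        # backward, via the CW mode
    return T_F, T_B

Om = np.linspace(0.8, 1.2, 2001)
T_F, T_B = np.vectorize(spectra)(Om)   # plot T_F, T_B vs Om to see the Fano shape
```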
Hence, we are able to effectively control and tune the Fano line-shapes by appropriately adjusting the coupling rate J between the two counterpropagating modes. As shown in ref. 71, the effective mode coupling in a WGM microresonator optomechanical system is usually introduced by internal defect centers or surface roughness. These factors (i.e., experimentally manipulating the internal defect centers or surface roughness) may therefore be used to control and tune the coupling rate J between the two counterpropagating modes. In view of rapid advances in micro/nano-fabrication technology, we believe that quantitative control of J will become experimentally accessible in the near future.
Next, we demonstrate that the line-shapes of the Fano resonance can also be manipulated by varying the power of the control field P_c. Figure 4 shows the normalized power forward transmission T_F and backward reflection T_B as functions of the detuning Ω for four different values of the control-field power P_c. In order to illustrate the dependence of the Fano line-shapes on P_c, we keep the parameters J = 0.6 Ω_m, ε_p/ε_c = 0.05, and η_c = 0.5 fixed; the other system parameters are exactly the same as those in Fig. 2. In Fig. 4(a), we plot the spectra of T_F and T_B for the case that a control field of P_c = 10 μW is applied. An asymmetric Fano-shaped OMIT resonance can be observed after magnifying the figure, but the Fano feature is very weak because the power of the control field is quite small. For the case of P_c = 100 μW in Fig. 4(b), on closer inspection, a very weak Fano resonance around Ω = Ω_m starts to appear in the generated transmission and reflection spectra. Interestingly, when the power of the control field is further increased to P_c = 1 mW and 10 mW, we find a pronounced sharp asymmetric dip (peak) in the transmission (reflection) spectrum, which we identify as the Fano resonance. The Fano resonance becomes progressively stronger, as can be verified from Fig. 4(c,d). From these figures, it is evident that the sharp asymmetric line-shape of the Fano resonance forms under stronger control-field powers. In contrast to the results in Fig. 3, Fig. 4 also shows that the spectral profiles of T_F and T_B do not broaden outwards from the resonance peak; only the peak-to-dip heights of the Fano resonance increase gradually.
Finally, we turn to discuss how the line-shapes of the Fano resonance can be controlled by the outgoing coupling coefficient η_c and the optomechanical coupling strength G in the presence of the mode coupling. In Fig. 5, we first show that the Fano line-shape can be tuned by properly changing the outgoing coupling coefficient η_c under three cavity loading conditions [under-coupling (κ_ex < κ_i, or η_c < 0.5), critical coupling (κ_ex = κ_i, or η_c = 0.5), and over-coupling (κ_ex > κ_i, or η_c > 0.5)]. Figure 5(a) plots the normalized power forward transmission spectrum T_F as a function of the detuning Ω for three different values of the outgoing coupling coefficient η_c. Specifically, for the case η_c = 0.1 in Fig. 5(a), the normalized power forward transmission spectrum T_F is a symmetric, W-shaped double Lorentzian-like line-shape. As the outgoing coupling coefficient η_c is increased, for example to η_c = 0.5 and 0.8, the asymmetric Fano resonance becomes increasingly obvious. For the sake of clarity, the inset in Fig. 5(a) shows a magnified view of the Fano line-shapes in a smaller region near Ω = Ω_m. Likewise, Fig. 5(b) shows the normalized power backward reflection spectrum T_B as a function of the detuning Ω for the three different values of the outgoing coupling coefficient η_c. Compared with Fig. 5(a), the pattern of the Fano resonance is inverted, and the sharp peaks appear on the right side of the resonance dip. This is due to the different phase shifts between the two resonance modes in the WGM microresonator. Such a situation also occurs in Figs 3 and 4. Lastly, Fig. 6 shows the tunable line-shapes of the Fano resonance obtained by varying the optomechanical coupling strength G. In Fig. 6(a) and (b), we can observe the variation of the peak-to-dip spectral spacing as the optomechanical coupling strength G is adjusted: when the absolute value of the coupling strength G increases, the peak-to-dip spectral spacing increases gradually.
Overall, in view of the detailed discussions above, we conclude that the coherent coupling of the two counterpropagating modes, which was neglected in previous studies 11,27, plays a key role in the generation of asymmetric Fano resonance line-shapes. The on-chip WGM microresonator optomechanical system provides an easy and robust way to tune and control the Fano resonance spectrum by simply changing experimentally achievable parameters, such as the mode coupling rate J, the power of the control field P_c, the outgoing coupling coefficient η_c, and the optomechanical coupling strength G. All of these system parameters are simple and flexible to adjust in our proposed arrangement. Such Fano resonance control will be useful for enhancing the sensitivity of sensors and for designing low-power all-optical switches 54.
Discussion
We have proposed a fully on-chip scheme for generating and controlling optical multistability and sharp asymmetric Fano resonance OMIT line-shapes in a three-mode microresonator optomechanical system. The WGM microresonator is driven by an external two-tone laser field, consisting of a strong control field and a relatively weak probe field, via a tapered fiber. In our model, we consider that the two stationary modes cannot be resolved, and hence both of them are populated, when J ≤ κ; alternatively, we assume J > κ but not J ≫ κ (i.e., the transition coupling region κ < J < 3κ). This is quite different from the previous approach in ref. 27. There are two main results in our study. First, by solving the coupled Heisenberg-Langevin equations and analyzing the stationary-state solution, we find that the coupling rate of the two counterpropagating modes, parameterized by J, plays a key role in manipulating the optical multistable properties of the displacement of the mechanical motion. When J is very small, there is only one bistable region. Importantly, with increasing J the bistability first turns into multistability and then splits into two separated bistable regions. Second, by using the standard input-output relation and the perturbation method, we analyze in detail the normalized power transmission and reflection spectra of the weak probe laser field. With readily accessible system parameters, we observe sharp asymmetric Fano resonance line-shapes, which originate from the interference between the forward and backward reflections of the optical field along different pathways. In addition, the sharp asymmetric Fano spectral profile can be controlled and tuned by appropriately changing the mode coupling rate between the two counterpropagating modes, the distance between the cavity and the tapered fiber, the power of the control field, and the optomechanical coupling strength between the counterpropagating modes and the mechanical mode, respectively. The scheme could be realized with current technology 27, and all of these system parameters can be adjusted readily under realistic experimental conditions. This investigation may provide new insights into the interaction between the WGM microresonator and the mechanical motion. Our results will also be helpful in practical applications such as all-optical switches, modulators, and high-sensitivity sensors.
Methods
Derivation of the normalized power forward transmission T_F and backward reflection T_B of the probe field. We transform the above Hamiltonian [Eq. (1)] into the frame rotating at the frequency ω_c of the control field by means of the unitary transformation $U = \exp[-i\omega_c t(\hat{a}^\dagger\hat{a} + \hat{b}^\dagger\hat{b})]$ and obtain the coupled Heisenberg-Langevin equations of motion. In order to solve this set of coupled equations (11)-(13), following the method of ref. 27 we introduce the following ansatz for the fluctuation parts of the intracavity fields and of the displacement of the mechanical mode: $\delta a = A_1^- e^{-i\Omega t} + A_1^+ e^{i\Omega t}$, $\delta b = B_1^- e^{-i\Omega t} + B_1^+ e^{i\Omega t}$, and $\delta x = X_1 e^{-i\Omega t} + X_1^* e^{i\Omega t}$. Upon substituting Eqs (14)-(16) into Eqs (11)-(13) and sorting the terms by the rotating factors e^{±iΩt}, we obtain five algebraic equations for the amplitudes $A_1^\pm$, $B_1^\pm$, and $X_1$. It is worth noticing here that the steady-state values a and b are governed by Eqs (8)-(10). From Eqs (17)-(21), after tedious but straightforward calculations, the solutions for X_1, A_1^-, and B_1^- can be derived explicitly; they describe the optical responses at the control-field frequency ω_c, the probe-field frequency ω_p, and the new frequency 2ω_c − ω_p for the forward (backward) direction output field, respectively. That is to say, from the physical point of view, expressions (25) and (26) reveal that the forward and backward output fields contain two input frequency components (the control field at ω_c and the probe field at ω_p) and one additional frequency component (also called the Stokes field 11) at 2ω_c − ω_p. In the following, we focus only on the output component at the frequency of the weak probe field, as in ref. 27. One can also study the features of the output fields at the control and Stokes frequencies in an analogous way; the corresponding results are not shown here for reasons of space.
Hence, the normalized power forward transmission T_F and backward reflection T_B of the probe field can be expressed through the output amplitudes as T_F = |c_pF/ε_p|² and T_B = |c_pB/ε_p|², respectively. Equations (27) and (28) are the central results of this paper.
"Physics"
] |
Use of laser tweezers to analyze sperm motility and mitochondrial membrane potential
We combine laser tweezers with custom computer tracking software and robotics to analyze the motility [swimming speed, VCL (curvilinear velocity), and swimming force in terms of escape laser power (Pesc)] and energetics [mitochondrial membrane potential (MP)] of individual sperm. Domestic dog sperm are labeled with a cationic fluorescent probe, DiOC2(3), that reports the MP across the inner membrane of the mitochondria located in the sperm's midpiece. Individual sperm are tracked to calculate VCL. Pesc is measured by reducing the laser power after the sperm is trapped using laser tweezers until the sperm is capable of escaping the trap. The MP is measured every second over a 5-s interval during the tracking phase (sperm is swimming freely) and continuously during the trapping phase. The effect of the fluorescent probe on sperm motility is addressed. The sensitivity of the probe is measured by assessing the effects of a mitochondrial uncoupling agent (CCCP) on the MP of free-swimming sperm. The effects of prolonged exposure to the laser tweezers on VCL and MP are analyzed. The system's capabilities are demonstrated by measuring VCL, Pesc, and MP simultaneously for individual sperm. This combination of imaging tools is useful for quantitatively assessing sperm quality and viability.
Introduction
Quantitative and objective techniques are important for assessing sperm quality. Computer-assisted sperm analysis (CASA) systems have been developed to measure parameters such as curvilinear velocity (VCL), amplitude of lateral head movement, and percent of motile sperm, providing quantitative information about the overall motility of a sperm population. 1,2 In addition, flow cytometry in combination with fluorescent probes has been used to monitor mitochondrial membrane potential (MP) in sperm cells. [3][4][5][6] MP, given by the Nernst equation, is dependent on the distribution of hydrogen protons across the inner mitochondrial membrane. This electrochemical proton gradient drives the synthesis of ATP that is used for energy by the cell. Therefore, the fluorescence intensity of cyanine dyes, such as 3,3′-diethyloxacarbocyanine iodide [DiOC2(3)], which increases as the magnitude of MP increases, is an indicator of the energetic state of the cell. Studies have demonstrated that high MP in sperm correlates with increased motility 3 as well as high fertility performance. [4][5][6] Several fluorescent probes are available, and comparisons between probes have been performed. 4,7 Specifically, Novo et al. 7 showed that the ratiometric technique for estimation of MP using DiOC2(3) was an accurate indicator of bacterial MP.
Single spot, gradient force laser tweezers are another tool that has been used to study sperm motility by measuring sperm swimming force. It has been shown 8 that the minimum laser power needed to hold a sperm in the optical trap (threshold escape power) is directly proportional to the sperm's swimming force (F = Q × P/c, where F is the swimming force in newtons, P is the laser power in watts, c is the speed of light in the medium with a given index of refraction, and Q is the geometrically determined trapping efficiency parameter). Previous studies have demonstrated a positive correlation between sperm swimming speed and escape laser power. [9][10][11] Optical traps have also been used 12 in combination with the fluorescent probe JC-1 (5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethylbenzimidazolylcarbocyanine iodide). That study measured MP as the sperm were held in the laser trap. A major drawback, however, was that neither the mitochondrial MP of the individual sperm before or after exposure to the laser tweezers nor the sperm's swimming speed and/or swimming force could be determined.
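The conversion from escape power to swimming force implied by F = Q × P/c is a one-liner. In the sketch below, the trapping efficiency Q and the refractive index of the medium are illustrative placeholder values, not values reported in this study:

```python
# Swimming force from escape laser power, F = Q*P/c (see text)
Q = 0.13                      # trapping efficiency parameter (illustrative)
n_medium = 1.33               # refractive index of the medium (assumed)
c = 2.998e8 / n_medium        # speed of light in the medium (m/s)
P_esc = 55e-3                 # measured escape power (W)
F = Q * P_esc / c             # swimming force (N)
print(f"F = {F * 1e12:.1f} pN")   # ~32 pN for these placeholder values
```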
Recently, computer tracking software and robotics were combined with the laser tweezers system to automate sperm trapping experiments. 13,14 This custom-designed real-time automated tracking and trapping system, or RATTS, presents itself as a potentially useful tool, in addition to CASA systems and flow cytometry, for assisting in overall sperm quality assessment. RATTS has been modified to measure mitochondrial MP (prior to, during, and after trapping) in conjunction with the swimming speed and escape laser power of individual sperm. 15 In this paper, we describe the modification of RATTS to analyze domestic dog sperm labeled with the fluorescent probe DiOC2(3). The effects of the probe on sperm motility are studied. The ability of the probe, as well as of the system, to monitor changes in MP is quantified. The effects of prolonged exposure to the laser tweezers on VCL and MP are analyzed. Finally, the system's capabilities are demonstrated by simultaneously measuring VCL, Pesc (swimming force in terms of escape laser power), and MP for individual sperm. The results show that the combination of laser tweezers, robotics, and the measurement of mitochondrial MP creates a system that is capable of providing a detailed description of individual sperm, including both motility and energetics.
Specimen
Semen samples collected from several domestic dogs were cryogenically frozen according to a standard protocol. 16,17 Studies on human sperm have shown that properly freezing, storing, and thawing sperm has no significant effect on escape force. 18 Furthermore, we compared frozen-thawed and fresh dog sperm from the same semen sample and found that the swimming speed and escape laser power distributions were statistically the same (swimming speed: P>0.06; escape power: P>0.9, data not shown). Therefore, frozen-thawed semen samples in this study are considered comparable to fresh samples.
For each experiment, a sperm sample is thawed in a water bath (37 °C) for approximately 1 min and its contents are transferred to an Eppendorf centrifuge tube. The sample is centrifuged at 2000 rpm for 10 min (the centrifuge tip radius is 8.23 cm). The supernatant is removed and the remaining sperm pellet is suspended in 1 mL of pre-warmed media [1 mg of bovine serum albumin (BSA) per 1 mL of Biggers, Whitten, and Whittingham (BWW) medium, osmolality of 270 to 300 mmol/kg water, 19 pH of 7.2 to 7.4]. Note that this media is noncapacitating, as it has a low concentration of bicarbonate 20 (4 mM). Therefore, the sperm do not achieve hyperactivity.
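The spin speed and tip radius quoted above fix the relative centrifugal force, which can be checked quickly:

```python
import math

rpm, r = 2000, 0.0823                # spin speed and tip radius (m), from the text
omega = 2 * math.pi * rpm / 60       # angular velocity (rad/s)
rcf = omega**2 * r / 9.81            # relative centrifugal force, in units of g
print(f"RCF ~ {rcf:.0f} x g")        # ~368 x g
```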
To monitor the voltage potential across the inner membrane of the mitochondria, sperm are labeled with DiOC2(3) (3,3′-diethyloxacarbocyanine iodide, 30 nM final dye concentration, Molecular Probes, Invitrogen Corp., Carlsbad, California). DiOC2(3) is a cationic cyanine dye that primarily accumulates in the mitochondria of a cell in response to the electrochemical proton gradient, or MP. The probe emits both red and green fluorescence. The ratiometric parameter (red/green intensity) is a size-independent measure of MP, as the green fluorescence varies with size and the red fluorescence is dependent 21 on both size and MP. After the dye is added, the cells are incubated for 20 min in a 37 °C water bath and then centrifuged for 10 min (2000 rpm). The pellet is suspended in the media by "flicking" the tube according to the protocol for the MitoProbe assay kit (Invitrogen Corp.) for flow cytometry. To test the sensitivity of both the probe and the system to changes in MP (see Sec. 2.4), an aliquot of sperm is exposed to the proton ionophore CCCP (carbonyl cyanide 3-chlorophenylhydrazone, 50 µM final concentration, Molecular Probes, Invitrogen Corp., Carlsbad, California), which is known 7,22 to decrease the magnitude of the MP. CCCP and DiOC2(3) are added to the sperm simultaneously.
Final dilutions of ~30,000 sperm/mL of media are used in the experiments. The sperm dilution is loaded into a 3 mL Rose tissue culture chamber and mounted into a microscope stage holder according to previously described methods. 23 The sample is kept at 37 °C using an air curtain incubator (NEVTEK, ASI 400 Air Stream Incubator, Burnsville, Virginia). A thermocouple is attached to the Rose chamber to ensure temperature stability.
Hardware, Software, and Optical Design
The optical system, shown in Fig. 1(a), is adapted from Nascimento et al. 10 A single point gradient trap is generated using an Nd:YVO4 continuous wave 1064-nm wavelength laser (Spectra Physics, BL-106C, Mountain View, California), coupled into a Zeiss Axiovert S100 microscope equipped with a phase III, 40×, 1.3 numerical aperture (NA), oil immersion objective (Zeiss, Thornwood, New York). The laser power in the specimen plane is attenuated by rotating the polarizer, which is mounted in a stepper-motor-controlled rotating mount (Newport Corporation, Model PR50PP, Irvine, California).
The imaging setup, shown in Fig. 1(b), was adapted from Mei et al. 12 Two dual video adapters are used to incorporate the laser into the microscope and simultaneously image the sperm in phase contrast and fluorescence. The laser beam enters the side port of the first dual video adapter and is transmitted to the microscope. A filter (Chroma Technology Corp., Model E700SP-2P, Rockingham, Vermont) is used to prevent back reflections of IR laser light from exiting the top port of the adapter but allow reflected visible light coming from the specimen to pass to the second video adapter. The specimen is viewed in phase contrast using red light filtered from the halogen lamp (Chroma Technology Corp., Model D680/60 X) and in fluorescence using the arc lamp (Zeiss FluoArc). The fluorescence filter cube contains an HQ 500/20-nm excitation filter and a dichroic beamsplitter with a 505-nm cut-on wavelength. The second dual video adapter, attached to the top port of the first video adapter, uses a filter cube to separate the phase information (reflects >670 nm) from the fluorescence (transmits 500 to 670 nm). The phase contrast images are filtered through a filter (Chroma Technology Corp., Model HQ 675/50M) and acquired by a CCD camera (Cohu, Model 7800, San Diego, California, operating at 40 frames/s) coupled to a variable zoom lens system (0.33 to 1.6× magnification) to increase the field of view. For the fluorescent images, a Dual-View system (Optical-Insights, Tucson, Arizona) splits the red and green fluorescent light emitted by the specimen to produce a copy of the image for each color. Fluorescent emission filters are placed in this emission-splitting system (green fluorescence emitter: HQ 535/40-nm M filter; red fluorescence emitter: HQ 605/50-nm M filter, Chroma Technology Corp.). The Dual-View system is coupled to a digital camera (Quantix 57, Roper Scientific Inc., Tucson, Arizona) that captures the fluorescent images.
The hardware and software used to perform the experiments in this paper are described in greater detail elsewhere. 15 Briefly, two computers are networked together. An upper-level computer that acquires and displays the images from the Cohu CCD is responsible for tracking and trapping the sperm of interest. The lower-level computer is prompted by the upper-level computer to acquire the fluorescent images of the sperm's mitochondria from the Quantix CCD. From each image, the lower-level computer calculates the ratio (red/green) value.
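The ratio computation itself is not spelled out here; the following is a minimal sketch of one plausible implementation, assuming (purely for illustration) that each Dual-View frame carries the green copy in its left half and the co-registered red copy in its right half, with a uniform camera background:

```python
import numpy as np

def mp_ratio_stats(frame, background=100.0):
    """Average/max/min red-to-green ratio from one Dual-View frame.

    Assumes the left half of `frame` is the green channel and the right half
    is the co-registered red copy (layout assumed for illustration), with a
    uniform camera background subtracted from both channels.
    """
    h, w = frame.shape
    green = frame[:, : w // 2].astype(float) - background
    red = frame[:, w // 2 :].astype(float) - background
    mask = green > 5.0 * np.sqrt(background)   # keep pixels well above noise
    if not mask.any():
        return np.nan, np.nan, np.nan
    r = red[mask] / green[mask]
    return r.mean(), r.max(), r.min()
```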
Once the user selects the sperm of interest, the upper-level computer tracks the sperm and calculates the VCL (in micrometers per second) in real time. Note that other motility parameters typical of a CASA system, including lateral head movement, straight-line velocity, and smoothed-path velocity, are also calculated. 10,13,14 Nascimento et al. found that several of these parameters had a near equal influence on the variability in the data set. 10 Therefore, for the purpose of this paper, only VCL is used to describe sperm swimming speed, as it is a more comprehensive parameter, accounting for both forward progression and lateral head movement. 11 During this tracking phase, the microscope stage moves the sperm to the center of the field of view if it nears the edge. In addition, the sperm is relocated to a defined (x, y) coordinate every second and a command is sent to the lower-level computer to acquire a fluorescent image. For track-and-trap experiments, the sperm is relocated to the laser trap coordinates after being tracked and fluorescently imaged for 5 s. Once the sperm is successfully trapped by the laser tweezers, the lower-level computer continuously acquires fluorescent images. The laser power is either kept constant (see Sec. 2.5) or attenuated (see Sec. 2.6). If the power is attenuated, the power at which the sperm is capable of escaping the trap (Pesc, in milliwatts) is recorded by the upper-level computer. The sperm is tracked and fluorescently imaged for an additional 5 s once it is released from or escapes the optical trap. Data for each tracked sperm, including (x, y) trajectory coordinates in the field of view, stage movement, instantaneous VCL, and average VCL at each time point, are saved to a file on the upper-level computer. The fluorescent data, including the average, maximum, and minimum ratio values for each image, are saved to a file on the lower-level computer.
Effect of Probe on Sperm Motility
The effects of DiOC 2 (3) on sperm motility are assessed. The swimming speed (VCL in micrometers per second) distribution of sperm exposed to DiOC 2 (3) is compared to that of sperm not exposed to the probe (control). Sperm from each group are analyzed during the same time intervals. During the first time interval, both groups are viewed in phase contrast microscopy only. During the second time interval, again, both groups are viewed in phase contrast microscopy. However, the test group exposed to DiOC 2 (3) are also illuminated with excitation light (500 nm) from the arc lamp.
Sensitivity to Changes in MP
The ability of the system to measure changes in MP, as well as the ability of the probe to detect and report them, is tested. The test sperm group is exposed to both CCCP (50 µM) and DiOC2(3) (30 nM), whereas the control sperm group is labeled only with DiOC2(3) (30 nM). Sperm are loaded onto the microscope and tracked for 10 s. A fluorescent image is acquired every second. The ratio value distributions of the test and control groups are compared.
Track, Trap (Constant Power and Constant Duration), and Fluorescently Image
Sperm labeled with DiOC2(3) are tracked and trapped under constant power (460 mW) for a constant duration (90 s). For each sperm analyzed, fluorescent images are acquired approximately once every second during the 5 s prior to and after trapping, and acquired continuously once the sperm is in the trap. The effects of prolonged exposure to the laser tweezers on MP and swimming speed (VCL in micrometers per second) are assessed.
Track, Trap (Decaying Laser Power), and Fluorescently Image
Sperm labeled with DiOC2(3) are tracked and trapped under decaying power. Again, for each sperm, fluorescent images are acquired once every second during the 5 s prior to and after trapping, and acquired continuously once the sperm is in the trap. Examples of the various sperm responses to the optical trap are described.
Effect of Probe on Sperm Motility
The swimming speed (VCL in micrometers per second) distribution of sperm cells exposed to DiOC2(3) is compared to that of the control sperm using the Wilcoxon paired-sample test (the distributions are found not to be Gaussian, thus requiring the non-parametric test). The VCL distributions are found to be statistically equal, even when the probe is activated by the arc lamp (without arc lamp illumination: P > 0.3, N_control = 24, N_DiOC2(3) = 37; with arc lamp illumination: P > 0.2, N_control = 23, N_DiOC2(3) = 19). Thus, DiOC2(3) does not adversely affect sperm motility.
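A comparison of this kind can be reproduced with standard tools. Because the two groups have unequal sizes, the sketch below uses the unpaired rank-sum (Mann-Whitney) form of the Wilcoxon test, and the data arrays are random placeholders rather than the measured VCL values:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
vcl_control = rng.normal(70, 20, 24).clip(min=0)   # placeholder VCL data (um/s)
vcl_dioc = rng.normal(70, 20, 37).clip(min=0)      # placeholder VCL data (um/s)

stat, p = mannwhitneyu(vcl_control, vcl_dioc, alternative="two-sided")
print(f"P = {p:.2f}")   # large P -> distributions statistically indistinguishable
```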
Sensitivity to Changes in MP
Since the probe used in this study is typically applied in flow cytometry experiments, we wanted to verify that our custom system and method of analysis are sensitive to changes in MP. We also wanted to verify that this probe reports changes in MP. Figure 2 shows the ratio value over a 10-s interval for sperm from the test group (with CCCP) and the control group (without CCCP). The figure demonstrates that CCCP does indeed cause a decrease in MP and that both the probe and the system are capable of reporting such a decrease. The average ratio value of sperm exposed to CCCP (3.74 ± 0.75, N_DiOC2(3) = 33) versus that of the control sperm (5.82 ± 0.41, N_control = 36) is found to be statistically significantly different (P ≪ 0.001) using the Student's t test (distributions are found to be Gaussian). The velocity of each sperm was also measured. The average VCL value of sperm exposed to CCCP (67.15 ± 20.77) versus that of the control sperm (70.39 ± 24.70) is found to be statistically the same (P > 0.56) using the Student's t test (distributions are found to be Gaussian).

Track, Trap (Constant Power and Constant Duration), and Fluorescently Image

Figure 3 shows the ratio value prior to trapping, during trapping, and after trapping plotted over time for two different sperm. For the sperm in Fig. 3(a), there is an overall decline in ratio value over time as the sperm is held in the trap. Once released from the optical trap, the sperm's ratio value does increase; however, it does not fully recover within 5 s to its pre-trapping value. Similarly, the sperm's swimming speed, VCL, does not recover to its pre-trapping value. For the sperm in Fig. 3(b), there is a slight decrease in ratio value while the sperm is in the trap. Again, neither swimming speed nor ratio value fully recovers post-trap to the pre-trapping values. A previous study had shown that trapping sperm for 15 s at a constant power of 420 mW in the focal volume had a negative effect on sperm motility. 10 The results reported here are consistent with those findings.
Track, Trap (Decaying Laser Power), and Fluorescently Image
Pesc was plotted against VCL (data not shown) and showed the same positive correlation between the two parameters as found in previous studies 10 (regressions applied to data sets found to be statistically equal, P >0.2). Figure 4 plots the ratio value over time for four sperm for the three different phases: prior to trapping, during trap, and after trapping. These four examples demonstrate the various responses the sperm have to the optical trap. The sperm in Fig. 4(a) did not escape the trap. After 10 s, the trapping power reaches the minimum 3.8 mW, at which point the trap turns off. This sperm's VCL slightly increased after being trapped, but the average ratio value decreased. The sperm in Fig. 4(b) escaped the trap at 55 mW and had an approximate 18% increase in VCL after trapping. However, the average ratio value was nearly the same after trapping as it was prior to trapping. The sperm in Fig. 4(c), although it escaped at a relatively high power, had a significant decrease in VCL, yet the average ratio value increased slightly. The sperm in Fig. 4(d), which escaped the trap at 26 mW, also had a decrease in VCL and an increase in average ratio value.
Discussion
In this paper, a mitochondrial membrane potential probe was used in combination with a custom-automated tracking and trapping system. We demonstrated how this technique can be applied to the study of sperm motility and energetics. Moreover, we created a protocol that can be used to compare various MP probes. First, the effects of the probe on sperm swimming speed are established. Specifically, DiOC 2 (3) was shown to not affect sperm swimming speed (VCL). Second, the probe's ability to report an expected decrease in MP was verified. The ratio measurement of the red to green fluorescent signal from DiOC 2 (3) showed a significant decrease in MP caused by the addition of the proton ionophore CCCP (see Fig. 2). This also demonstrates that the system's hardware and software, including the custom algorithms, are sensitive to changes in MP. Third, the system can then be used to monitor MP of individual sperm over a long period of time and assess the adverse effects of prolonged exposure to optical traps. As shown in Fig. 3, both VCL and MP values post-trapping are less than those prior to trapping. These results show that the ratio value can reflect the varying degrees of cell damage induced by the laser trap as well as partial cell recovery once the laser trap is turned off. Fourth, and finally, we have demonstrated how this system can be used to simultaneously measure sperm swimming speed, escape laser power, and sperm mitochondrial MP in real time.
In conclusion, we created a technique to quantitatively assess sperm quality and viability. We demonstrated how a combination of imaging and optical tools can be used to provide a detailed description of individual sperm by measuring not only sperm motility parameters, such as VCL and Pesc, but also mitochondrial MP. This system can therefore be used to address the relationship between mitochondrial respiration and motility. Knowing that there is indeed a relationship between VCL and Pesc, as found in a previous study, 10 one would expect that there would also be relationships between VCL and MP and/or Pesc and MP. However, to draw statistically significant conclusions regarding these relationships, more experiments must be conducted to achieve larger N values. More importantly, this system can be used to gain a better understanding of the role of oxidative phosphorylation in sperm cell motility. For example, the results found in this paper interestingly show no correlation between MP and sperm motility (in terms of VCL) when the sperm were exposed to CCCP. One would expect exposure to this ionophore to inhibit mitochondrial ATP production and thus reduce sperm motility. However, no such decrease in velocity was observed (the VCL of the control group was found to be statistically equal to that of the test group). This suggests that perhaps another pathway, such as glycolysis, which is known to occur along the sperm tail, or principal piece, 24-26 may be supporting motility when oxidative phosphorylation is inhibited. Future studies will assess the effects of various electron transport chain inhibitors, such as rotenone and antimycin A, as well as glycolytic inhibitors, such as 2-deoxy-D-glucose, on sperm mitochondrial MP, VCL, and Pesc.

Fig. 2 caption: Effects of CCCP on mitochondrial membrane potential. The ratio value (red/green fluorescence) is plotted against time (in seconds). Ratio values are measured over a 10-s interval for the test sperm group (with CCCP, in magenta) and the control sperm group (without CCCP, in black). Each track represents an individual sperm.

Fig. 4 caption: Track, trap (decaying laser power), and fluorescently image. The ratio value (red/green) is plotted against time (in seconds) for the three different phases: prior to trapping, during trapping, and after trapping. Various escape powers and swimming speeds are represented by the four sperm. The average ratio value (AveRat) and VCL prior to (pre) and after trapping (post), as well as Pesc, are inset in the figure for each sperm (a) to (d). (Color online only.)
"Biology",
"Computer Science"
] |
The alluaudite-like arsenate NaCaMg3(AsO4)3
The title compound, sodium calcium trimagnesium tris(arsenate), an alluaudite-like arsenate, was prepared by solid-state reaction at high temperature. The structure is built up from edge-sharing MgO6 octahedra in chains associated with the AsO4 arsenate groups. The three-dimensional network leads to two different tunnels occupied statistically by Na+ and Ca2+. One As and one Mg atom lie on twofold rotation axes; one Na and one Ca are disordered over two sites with occupancies of 0.7 and 0.3 and these sites lie on a twofold rotation axis and an inversion centre, respectively.
Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: BR2069).
Comment
The crystal structure of NaCaMg3(AsO4)3 is closely related to the common structure type of the well-known mineral alluaudite, with the general formula X(1)X(2)M(1)M(2)2(PO4)3 (Moore, 1971; Yakubovitch et al., 1994). It can be described by a Mg3(AsO4)3 framework built up from a complex arrangement of distorted MgO6 octahedra and AsO4 tetrahedra. A projection of the structure in a polyhedral representation is presented in Fig. 1. It consists of Mg1O6 and Mg2O6 octahedra that share edges to form staggered chains stacked parallel to the [10-1] direction. Equivalent chains are linked together through the corners of the AsO4 tetrahedra. As1O4 connects two chains, and thus two of its O atoms belong to the same chain.
The As2O4 tetrahedron shares its four O-atom vertices with four different MgO6 octahedra belonging to three adjacent chains: two octahedra belong to the same chain, and the other two to two different chains. Within this arrangement of MgO6 octahedra and AsO4 tetrahedra, the average As-O bond lengths are As1-O = 1.691 Å and As2-O = 1.694 Å. This structural arrangement delimits two types of hexagonal tunnels, parallel to the c axis and located at (1/2, 0, z) and (0, 0, z), respectively. The sodium (Na1, Na2) and calcium (Ca1, Ca2) cations are located in those channels.
The X(2) site at (0, 0, 0) contains 0.30 Na2 and 0.70 Ca2, whereas the site in the tunnel at (0, 0, z), shifted from the X(2) site by ±0.25 along z, is occupied by Na1 and Ca1 with occupancies of 0.70 and 0.30, respectively. There are a number of possible models for the cation distribution, and it is not possible to decide which is the best solution; we retain the solution with equal amounts of sodium and calcium. First, the occupancies of Na1 and Ca1 in the site shifted from X(2) were refined.
Second, for the X(2) site, the occupancies of Na2 and Ca2 were fixed so as to obtain electroneutrality. For each pair of cations sharing a site, the atomic displacement parameters were constrained to be equal with the EADP instruction. The bond valence sums of the Na1, Na2, Ca2, Mg1, Mg2, As1 and As2 atoms are in good agreement with their oxidation states (Brown & Altermatt, 1985). For Ca1, which partially occupies the tunnel, the bond valence sum deviates (1.33).
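The quoted bond valence sums follow Brown & Altermatt (1985), where each bond of length R contributes exp[(R0 − R)/b] with b = 0.37 Å. A minimal sketch, using the average As1-O distance quoted above and the tabulated R0 for As(V)-O (1.767 Å); a regular tetrahedron is assumed here only for illustration:

```python
import math

def bond_valence_sum(bond_lengths, R0, b=0.37):
    """Brown & Altermatt (1985): BVS = sum over bonds of exp[(R0 - R)/b]."""
    return sum(math.exp((R0 - R) / b) for R in bond_lengths)

# Regular AsO4 tetrahedron with the average As1-O distance from the text
bvs_as1 = bond_valence_sum([1.691] * 4, R0=1.767)
print(f"BVS(As1) ~ {bvs_as1:.2f}")   # ~4.9, consistent with oxidation state +5
```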
Experimental
Single crystals of NaCaMg3(AsO4)3 were prepared from a mixture of NaNO3, CaCO3, Mg(NO3)2·6H2O and NH4H2AsO4 in a 1:1:2:3 molar ratio. The powder was ground and then heated progressively in a porcelain crucible to 1223 K. This temperature was held for 3 days. The mixture was then cooled slowly to room temperature at 10 K/h. The product was washed with hot water, and prismatic, colorless crystals of the title compound were extracted. Qualitative analysis by electron microprobe revealed that they contain sodium, calcium, oxygen, arsenic and magnesium.
Data collection
Diffractometer: Enraf-Nonius CAD-4
Radiation source: fine-focus sealed tube; monochromator: graphite
Temperature: T = 293 (2) K
Scan mode: ω/2θ scans
Absorption correction: ψ scan (North et al., 1968)
R_int = 0.021
θ_min = 2.4°, θ_max = 28.0°
Index ranges: h = −15→14, k = −1→16, l = 0→8

Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > 2σ(F²) is used only for calculating R-factors(gt) etc., and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
"Materials Science"
] |
Tuning method for phase shifters with very low first field integral errors for the European X-ray Free Electron Laser
For the long gap tunable undulator systems of the European XFEL, 91 phase shifters are needed. They need to fulfil stringent and demanding field integral tolerances when their strengths, i.e., their magnetic gaps, are changed. In order to avoid additional correctors, their first field integral errors must not exceed 0.004 Tmm for self-amplified spontaneous emission operation at 1 Å. For longer wavelengths the requirements are slightly relaxed. In addition, a good field range of 0.5 mm is required. Phase shifters are manufactured using state-of-the-art techniques such as measurement and sorting of magnets, measurement and sorting of subassemblies, etc. In spite of these efforts, inhomogeneities of the permanent magnet material as well as mechanical manufacturing errors cannot be avoided and lead to violations of the demanding first field integral specifications. Therefore, a fast and robust shimming technique was developed for the serial production of these devices. It is based on measured signatures of shims with different geometries and uses symmetry properties of shims placed on different positions and on poles with different polarity. In this paper, the specifications for the phase shifters in the European XFEL are derived first. Then the method is described in detail and results are presented which demonstrate that all requirements can be fulfilled.
I. INTRODUCTION
The European X-ray Free Electron Laser (XFEL) will provide three undulator systems for user operation, called SASE1, SASE2, and SASE3 [1]. Their wavelength ranges cover 0.05-5.2 nm by changing the electron beam energy and/or the undulator gaps. High power radiation will be generated by using the process of self-amplified spontaneous emission (SASE). In order to reach saturation, the lengths of these systems are up to 215 m. The beam energy in the European XFEL can be varied between 8.5 and 17.5 GeV. Two undulator systems, SASE1 and SASE2, are optimized to cover the hard x-ray range from 0.05 up to 0.4 nm at 17.5 GeV. They need 35 undulator segments, each 5 m long, with a period length of 40 mm. SASE3 is a soft x-ray FEL. It uses 21 segments, each 5 m long, with a period length of 68 mm. Its radiation wavelength range starts at 0.4 nm at 17.5 GeV at a gap of 21 mm and ends at 5.2 nm at 8.5 GeV with a fully closed gap of 10 mm.
Long undulator systems need to be segmented for simple reasons: (i) For economic manufacturing, the length of an undulator segment is limited by machine tools. A good compromise is a length of 5 m. For longer devices, the technical effort gets too high. (ii) For the electron beam passing through the undulator system, auxiliary components such as quadrupoles, correctors, beam position monitors, vacuum pumps, etc., are needed. They are placed between undulator segments in so-called intersections. At the XFEL they are 1.1 m long. The resulting periodicity of 6.1 m is a good compromise with machine operation as well.
The longitudinal velocity in an undulator with a closed gap is lower than in the intersection, leading to a mismatch between the photon field and the microbunched electron beam. For fixed gap undulator systems such as FLASH and LCLS [2,3], this is a static problem which can be compensated with a suitable design of the undulator end fields. For systems with variable gaps, phase shifters are needed to provide an adjustable delay to the electrons for proper matching to the phase advance. They are placed in the intersections as well.
A phase shifter can be realized by using a chicane consisting of three electromagnets (EMs) [4,5]. Generally, for a given strength an EM phase shifter needs more space than a permanent magnet (PM) one. Furthermore, EMs need correction coils to trim the field integrals exactly to the required accuracy [4], and there are fringe fields outside an EM phase shifter, which might interfere with other magnetic components such as quadrupoles, corrector coils, or the end fields of undulators. All these components are closely stacked in an intersection. Also, heat dissipation by electromagnetic devices is an unwanted side effect in undulator systems, which are operated in a precision temperature-stabilized environment. For the European XFEL, a permanent magnet phase shifter was therefore developed; it is described in detail in Ref. [6]. Its magnetic principle is shown in Fig. 1 and briefly described here. It is a modified Halbach-type hybrid structure using four identical magnet modules. Each module is surrounded by a massive zero-potential iron yoke, which is static; its yoke gap cannot be moved. This iron yoke very effectively terminates all fringe fields. For changing the strength, the magnetic gap of the magnet packages is changed inside this static yoke. The magnetic field on axis can be described by two sinusoids with a period length of 55 mm separated by the iron yoke in the center. For demonstration, Fig. 2 shows the field and the first and second field integrals of one of the 91 phase shifters made for the European XFEL at the minimum gap of 10.7 mm. At this gap the peak field is about 1.4 T. The shapes of the first and second field integrals outside and in the center part demonstrate that the contribution of fringe fields is very low. The total device length is only 230 mm. There is perfect magnetic symmetry: positive and negative magnets and poles in all modules have the same dimensions and identical magnetic counterparts and surroundings. So, ideally, there should be no gap-dependent first field integral errors. However, direct fields caused by angular magnetization errors, as well as inhomogeneous PM material and manufacturing errors of poles and support structures, may lead to small residual gap-dependent errors. These errors may be above the tolerance limits, cannot be avoided completely by the manufacturing and assembly process, and therefore require compensation.
For the precision measurements of the first field integral described in this paper, the moving wire method was used in slightly modified form. The wire length is only 500 mm. A 50-strand multifilament wire was used; it is not spanned but cast in a thin fiberglass support plate. It is found that in this way the reproducibility of the measurements, typically 0.0003 Tmm (0.3 G cm), is enhanced. In this paper, measurements and tuning are focused on the gap dependence of the first field integrals of the phase shifters. There are, of course, small static components as well, which depend on the ambient magnetic field and the geographical orientation of a device. However, they do not change with time and are therefore no problem for operation. There are dedicated correctors in each intersection. During operation, they will be used to establish a reference orbit using beam-based alignment techniques. Therefore, in all measurements the static contribution is subtracted. It is small anyway, typically ≤ ±0.02 Tmm.
In this paper, tolerance requirements for the XFEL undulator segments are derived first. Then a tuning procedure is described which minimizes the gap dependence of the first field integral errors as well as the maximum gradients in the transverse beam directions, so that the specifications on the first field integrals in the good field region are met. It is robust and fast to apply and is used in the serial production of phase shifters for the European XFEL. Measured results are presented.
II. SPECIFICATIONS OF THE PHASE SHIFTERS FOR THE EUROPEAN XFEL
There are longitudinal and transverse errors which deteriorate FEL gain: phase jitter and beam wander. Imperfect phase shifters may contribute to both. If not adjusted well, there might be phase jitter in a long undulator system. The phase integral, which determines the phase shift at a given radiation wavelength, is controlled via the magnet gap. In Ref. [6], an adjustment accuracy of 5° was found acceptable, requiring a moderate accuracy of the phase shifter gap control of about 25 μm. Nonzero first field integrals of phase shifters steer the electron beam off the axis and decrease the overlap between the electron beam and the laser field. They originate in inhomogeneities of the magnet material and geometrical imperfections. The impact of these errors on XFEL performance was studied theoretically following the method described in Ref. [7], using the FEL simulation code GENESIS 1.3 [8].
The simulations are done in the following way: 100 different rectangular distributions of random kicks of strength ±0.001, 0.005, 0.010, and 0.018 Tmm are generated at the locations of the phase shifters. For each distribution, the resulting rms beam wander and the FEL power are calculated using GENESIS 1.3. The FEL power is normalized to the perfect, error-free configuration. Each configuration results in one data point and is plotted. It should be emphasized that this is a statistical investigation. The FEL power depends on the specific details of the randomly generated configuration and might differ considerably from distribution to distribution.
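The kick-ensemble part of this procedure (everything except the GENESIS runs) is easy to reproduce. The sketch below converts rectangular-distributed first-field-integral errors into kick angles via the beam rigidity and accumulates the orbit in a plain drift approximation, which ignores the FODO focusing and therefore overestimates the wander; the phase-shifter count is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
E_GeV = 17.5                         # beam energy
Brho = E_GeV / 0.299792458           # magnetic rigidity (T m), ~58.4
L = 6.1                              # cell period (m), from the text
n_ps = 34                            # phase shifters along the line (assumed)
kick = 0.010e-3                      # rectangular error bound (T m) = 0.010 Tmm

def mean_rms_wander(n_trials=100):
    """Mean rms orbit excursion from random phase-shifter kicks (drift model,
    no FODO focusing, so this overestimates the real wander)."""
    out = []
    for _ in range(n_trials):
        theta = rng.uniform(-kick, kick, n_ps) / Brho  # kick angles (rad)
        x = L * np.cumsum(np.cumsum(theta))            # orbit at each cell exit
        out.append(np.sqrt(np.mean(x**2)))
    return np.mean(out)

print(f"mean rms wander ~ {mean_rms_wander() * 1e6:.1f} um")
```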
Figure 3(a) shows the results for SASE1 or 2 operated at 17.5 GeV, 1 Å. It is seen that for 0.001 Tmm kicks (black squares) there is a marginal loss of FEL power. For 0.005 Tmm errors (red dots), losses up to about 10% are observed. For 0.010 Tmm (green triangles), losses up to 30% are observed, and, finally, for 0.018 Tmm errors (blue triangles), losses are up to about 70%. In order to reliably limit FEL power losses to about 10%, field integral tolerances were limited to ±0.004 Tmm to be on the safe side. Figure 3(b) shows the same analysis for SASE3 operated at 4 Å, which is the short wavelength limit of SASE3. The results suggest that in this case errors of 0.010 Tmm are just above threshold, but slightly smaller errors are permissible. A comparison with Fig. 3(a) shows that a 0.010 Tmm error at 1 Å leads in the worst case to 30% power degradation, but only to 15% at 4 Å. Figure 3(c) shows the results for SASE3 operated at 16 Å. It is seen that at this wavelength higher field integral errors up to 0.018 Tmm can be tolerated without compromising on power loss.
In order to cover the whole tuning range of SASE1 or 2 with some safety margin, the phase shifters (PS) in SASE1 or 2 will not be operated at gaps below 16 mm [6]. Therefore, field integral tolerances ≤0.004 Tmm are needed only for PS gaps larger than 16 mm. For SASE3, field integral tolerances can be relaxed as a function of the gap, but the PS strength is higher. For the longest wavelengths at SASE3, which require minimum PS gaps [6], the tolerance requirement is ±0.018 Tmm. For intermediate wavelengths there are no simulations; here tolerance requirements are obtained by linear interpolation. In order to have one standard PS at the European XFEL, these specifications are combined.
Field integral tolerance requirements must be fulfilled in a good field region of 1 mm in the horizontal plane, ±0.5 mm around the device axis. This value, with some safety margin, is determined by alignment tolerances of ±150 μm and limits the allowed integrated gradient to below ±0.004 T. A summary of the combined specifications is given in Table I.
These are tight tolerances and are a challenge for the shimming and tuning technique. However, their benefit is that the PS strengths and gaps can be changed freely and independently from the undulator segments in the system, and no PS-specific corrections are needed. This simplifies operation and facilitates special operational modes which require retuning of phase shifters over a wide range. Two examples are mentioned: (i) the K parameters of two adjacent undulator segments can be exactly tuned by systematically tuning the phase shifter over a large range [9]; (ii) lasing at higher harmonics can be suppressed or stimulated by retuning the phase shifters [10]. Such operations will be possible without any additional correction and are therefore compatible with fast changes.
III. MAGNETIC MOMENT IN SHIMS
The method of error compensation proposed in this paper relies on applying shims. Shimming is widely used for tuning insertion devices [11-15], and a qualitative understanding of the working principle of shims is essential. Shims are made of a highly permeable material, such as low-carbon iron foil with a low coercive field. The typical thickness is 0.05-0.5 mm; lateral dimensions range from several to several tens of millimeters. They may be placed either directly on a pole or beside a pole on the magnet. Because of its low coercive field, the external magnetic field drives the shim into saturation and induces a magnetic moment. The resulting field of the saturated shim is similar, but not identical, to that of a permanent magnet with the dimensions of the shim.
Figure 4(a) shows a sketch of a hybrid-type magnet structure as used for the phase shifters. While the magnetization of the magnets is parallel or antiparallel to the beam axis, the flux created by them is redirected by the poles perpendicular to the axis. If a shim is placed on a magnet next to a pole, a magnetic moment is induced by a small fraction of the flux. Its magnetic moment vector is predominantly parallel to the beam axis, but the field lines emerging from this shim are perpendicular. In contrast, in a shim on a pole the direction of the magnetic moment vector is the same as in the pole, and the effect is similar to a pole shift.
The magnetic field of a magnetic moment $\vec{m}$ at position $\vec{r}$ is given by

$$\vec{B}(\vec{r}) = \frac{\mu_0}{4\pi}\left(\frac{3\,\vec{r}\,(\vec{m}\cdot\vec{r})}{r^{5}} - \frac{\vec{m}}{r^{3}}\right).$$

The total first field integral of this moment, taken along a line parallel to the beam axis at transverse distance d, is given by

$$\int_{-\infty}^{+\infty}\vec{B}_{\perp}\,\mathrm{d}z = \frac{\mu_0}{2\pi}\,\frac{\vec{m}_{\perp}}{d^{2}}.$$

It is seen that only the perpendicular component m⊥ contributes to the first field integral; in this simple model m∥ has no effect. The magnetic moment induced in shims on poles results in m⊥, whereas shims on magnets result in m∥. Therefore, for the first field integral, shims on poles are more efficient.
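A quick numerical cross-check of this statement (a sketch with arbitrary moment magnitudes, not the actual induced shim moments): integrating the dipole field along a line parallel to z at transverse distance d reproduces μ0 m⊥/(2π d²) for the perpendicular moment and zero for the parallel one.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def dipole_field(m, r):
    """B field (T) of a point dipole with moment m (A m^2) at positions r (m)."""
    rn = np.linalg.norm(r, axis=-1, keepdims=True)
    return MU0 / (4 * np.pi) * (
        3 * r * np.sum(m * r, axis=-1, keepdims=True) / rn ** 5 - m / rn ** 3
    )

d = 0.01                                   # 10 mm off-axis distance
z = np.linspace(-2.0, 2.0, 200001)         # integration path parallel to z
pts = np.stack([np.full_like(z, d), np.zeros_like(z), z], axis=-1)
dz = z[1] - z[0]

m_perp = np.array([1.0, 0.0, 0.0])         # moment perpendicular to the axis
m_par = np.array([0.0, 0.0, 1.0])          # moment parallel to the axis

I1_perp = np.sum(dipole_field(m_perp, pts)[:, 0]) * dz
I1_par = np.sum(dipole_field(m_par, pts)[:, 0]) * dz

print(I1_perp, MU0 / (2 * np.pi * d ** 2))  # numerical vs analytic: agree
print(I1_par)                               # ~0: m_parallel does not contribute
```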
Unfortunately, shims on poles reduce the effective phase shifter gap and are not held in place by magnetic forces, so they need to be restrained (glued). In contrast, shims on magnets do not narrow the gap if a small overhang of the poles with respect to the magnets, typically 0.5 mm, is provided. Fortunately, it is observed that shims on magnets still provide a small correction strength, which is sufficient for the required corrections [16,17]. They stay on the magnets firmly and are therefore a convenient choice for phase shifter shimming.
As sketched in Fig. 4(b), the field induced by the shim on the beam axis depends on the lateral distance d. Therefore, by changing the transverse dimensions and positions of a shim, its gap-dependent contribution to the field integral is changed, and different gap dependencies may be created with different shim geometries.
IV. SYSTEMATIC TUNING STRATEGY

A. Basic assumptions
Shims have a direct effect on transverse field integrals and field integral gradients. A systematic tuning technique must determine shim geometry and placement. Two basic assumptions are made: (1) Linearity principle.-The contribution of any shim is proportional to its thickness.
(2) Superposition principle.-The contribution of a combination of several shims is equal to the sum of the contributions of the individual shims.
Full saturation of the shims is assumed. For the XFEL phase shifters, this was explicitly verified by simulations and measurements [16,17]. Identical shims, i.e., shims of the same size, placed on different positions have corresponding effects, but half and full magnets need to be treated separately. Accordingly, for the 32 different positions, shims on half magnets and shims on full magnets need to be distinguished. In each group the effect of a shim follows the specific symmetry properties. Without loss of generality, position 1 in Fig. 5 is defined to represent the "original" O position for half magnets and position 3 as the O position for full magnets.
B. Geometry
Positions opposite to the original positions with respect to the x axis, such as 2 and 4, are named "mirror" positions M. Positions on poles of opposite sign are named "reversed" positions R. Positions which are both opposite to the original position and on reversed poles are named MR, for "mirror reversed." The field integral induced by a shim on an O position as a function of the transverse position x is described by a function f(x), where the beam is at x = 0. Here f(x) can stand for the horizontal or the vertical field integral, although these are, of course, different functions. The contributions of shims on the other symmetry positions are obtained from f(x) by applying the corresponding symmetry operation: M shims go like f(−x), R shims like −f(x), and MR shims like −f(−x). These relationships determine important geometric properties for the combination of shims on different positions, in terms of both gap-dependent field integrals and transverse gradients. Table II summarizes the resulting combined functions of two shims at different symmetry positions. "Integral" stands for the first field integral along the phase shifter axis z, and "gradient" means the transverse gradient along the axis x. Two things need to be emphasized: (i) the symmetry operations apply only to shims of identical size; (ii) since only total field integrals are needed, the z position of a shim plays no role, and shims may be distributed over different locations, i.e., on different sides of a pole and/or on poles of different polarity.
Four different cases need to be distinguished: shims on half and on full magnets, and their effects on the vertical or the horizontal field. For each of these four cases there are O, M, R, and MR positions. Note that these positions, as shown in Fig. 5, differ for the four cases. Table III gives an overview. The symmetry relations described in Tables II and III are the basis for the shimming technique. As shown in Table II, combinations of shims may be found which independently correct field integral or gradient errors in one direction without changing the other.
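The bookkeeping of Tables II and III is compact enough to express directly. In the sketch below, the signature f(x) is a synthetic stand-in for a measured one; which pairs correct the integral and which the gradient in the actual devices follows Table II.

```python
import numpy as np

def at_position(f, pos):
    """Signature of a shim moved from its original O position to a symmetry
    position, given its signature f(x) at O (beam at x = 0)."""
    return {
        "O": lambda x: f(x),        # original position
        "M": lambda x: f(-x),       # mirrored with respect to the x axis
        "R": lambda x: -f(x),       # on a pole of reversed sign
        "MR": lambda x: -f(-x),     # mirrored and reversed
    }[pos]

# Synthetic, deliberately asymmetric example signature (arbitrary units)
f = lambda x: np.exp(-((x - 0.3) ** 2))

x = np.linspace(-1.0, 1.0, 5)
pair = at_position(f, "O")(x) + at_position(f, "MR")(x)   # f(x) - f(-x)
print(pair)        # odd in x, vanishing at x = 0 ...
print(pair[2])     # ... so an O + MR pair acts on the gradient only
```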
In order to demonstrate the strategy using the symmetry groups O, M, R, and MR of Table III, the impact of a shim of dimensions 36 × 8 × 0.4 mm placed on a full magnet was measured experimentally with the moving-wire technique.
Following Table III, eight of the 16 positions allowed for the vertical field integrals of full magnets (positions 3, 4, 5, 6, 11, 12, 13, and 14) were selected for the demonstration. Figure 6(a) shows the original data for these eight measurements. The sign and symmetry depend on the symmetry group. Two measurements were made on each symmetry position: O at 3 and 11, M at 4 and 12, R at 5 and 13, and MR at 6 and 14. In Fig. 6(b), the data are transformed back using the symmetry properties of Tables II and III. The overlap of these curves is quite good, demonstrating that the assumptions on symmetry hold fairly well. The slight differences are attributed to errors in the lateral dimensions and positioning of the shims.
V. FIELD INTEGRAL TUNING
For the large-scale production of the phase shifters, a fast tuning method was developed. It is based on (a) measured signatures of a selection of five types of shims with different geometries on half and full magnets, (b) the consistent use of the symmetry properties described in Tables II and III, (c) the application of the linearity and superposition principles, and, finally, (d) numerical optimization using the superposition of combinations of shims and their signatures in a systematic trial-and-error method, with a subsequent evaluation of field integrals and gradients. Discrete steps of 50, 100, 200, 300, 400, and 500 μm were selected for the shim thickness.
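Step (d) reduces to a discrete search once the signatures are measured. The sketch below scores candidate configurations through superposed, thickness-scaled signatures (linearity and superposition principles) and accepts improvements. Only the discrete thickness steps and the 10.5 mm minimum gap are taken from the text; the signature table, as-found error curves, number of shim slots, and termination rule are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

THICKNESS_UM = np.array([0, 50, 100, 200, 300, 400, 500])   # 0 = no shim
gaps = np.linspace(10.5, 40.0, 30)                          # gap values (mm)
n_slots = 10                                                # candidate shim slots

# Gap signatures per slot for a 100 um reference thickness: field integral
# (T mm) and transverse gradient (T) versus gap. Synthetic placeholders.
sig_int = rng.normal(0.0, 2e-3, (n_slots, gaps.size)) / (gaps / 10.5)
sig_grad = rng.normal(0.0, 2e-3, (n_slots, gaps.size)) / (gaps / 10.5)

# As-found errors of the device to be tuned (also placeholders).
err_int = 8e-3 / (gaps / 10.5)
err_grad = 6e-3 / (gaps / 10.5)

def residual(th_um):
    """Worst-case residual after applying shims of thickness th_um per slot."""
    scale = th_um[:, None] / 100.0          # linearity: scale with thickness
    res_int = err_int + np.sum(scale * sig_int, axis=0)     # superposition
    res_grad = err_grad + np.sum(scale * sig_grad, axis=0)
    return max(np.abs(res_int).max(), np.abs(res_grad).max())

best = np.zeros(n_slots)
best_cost = residual(best)
for _ in range(20000):                      # systematic trial and error
    trial = best.copy()
    trial[rng.integers(n_slots)] = rng.choice(THICKNESS_UM)
    cost = residual(trial)
    if cost < best_cost:
        best, best_cost = trial, cost

print(best, f"worst residual: {best_cost:.2e} T mm")
```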
A. Signatures of shims
For a numerical optimization, the dependence of the on-axis horizontal and vertical field integrals and gradients on the phase shifter gap needs to be known for different shim geometries and for full and half magnets. These dependencies are called "gap signatures." Many different shim geometries were investigated, and five shims with different dimensions and positions were finally selected for further work; the selection criterion was that their gap signatures differ markedly. Figure 7(a) illustrates the gap signature of the vertical field integral for these five shims. Figure 7(b) shows the dependence of the vertical field integrals on x at a phase shifter gap of 10.5 mm. This is the worst case, since at this minimum gap field integrals and gradients are highest. The gradient for the optimization is evaluated from the slope at x = 0. Figures 7(c) and 7(d) show the corresponding data for the horizontal field integrals.

Figure 8(a) demonstrates the tuning of the vertical field integrals: a large reduction, especially at small gaps, is obtained. The final status fits into the specs window given by Table I. Similarly, the tuned gradient is low enough that there is a good field area of ±0.5 mm around the axis. This is explicitly shown in Fig. 8(b) by the dependence on x at the minimum gap before and after tuning. Figures 8(c) and 8(d) show the results for the horizontal field integrals.
The results are quite similar to the vertical case.
VI. SUMMARY AND OUTLOOK
The subject of this paper is a compensation method for the residual gap-dependent first field integral errors of the phase shifters for the European XFEL. It can be applied right after mechanical assembly. Unavoidable production errors due to magnet material imperfections, limited mechanical accuracy, etc., can be corrected. A systematic method is described and demonstrated which reduces the gap dependence of these errors below the specifications required for the European XFEL. It is based on measured signatures of a set of reference shims, using the linearity and superposition principles. A shim configuration which minimizes the field integral errors and the transverse gradient is found by numerically evaluating a large number of different shim combinations in a trial-and-error method. Shim configurations which bring the first field integrals and gradients into full compliance with the European XFEL specifications, or even below them, are quickly found.
The shimming method described in this paper was developed for the serial production of these devices for the European XFEL. It combines fine-tuning to very low gap-dependent field integral errors with a fast fabrication speed, resulting in completely "passive" phase shifters. This means that the phase shifter gaps can be changed freely without the need for additional corrections. The operation of large undulator systems like those of the European XFEL, where up to 35 phase shifters need to be operated synchronously together with the undulator segments, is significantly simplified, and a change of wavelength can thus be made faster. As an ultimate objective, the wavelength of an undulator system might be changed "on the fly," allowing the wavelength to be scanned while the system continues lasing.
Furthermore, "passive" phase shifters facilitate special XFEL operation modes, which require substantial detuning of the phase shifters [9,10] since there are no steering errors, which may falsify results.
Shimming is commonly used to improve the field quality of insertion devices [11-15]. The method described in this paper enhances the application spectrum of shims. Three examples: (i) similar to phase shifters, gap-dependent steering errors of an undulator can be compensated completely and with high accuracy, so that it becomes passive and no further corrections are needed; (ii) in long undulators, gap-dependent steering errors well inside the structure are sometimes observed, which deteriorate the trajectory out of
FIG. 2. Field and first and second field integrals of one of the 91 phase shifters of the European XFEL.
FIG. 3. GENESIS 1.3 simulations at different wavelengths with a 17.5 GeV beam. The normalized power is plotted against the rms beam wander caused by different first field integral kicks at the phase shifter positions. (a) SASE1 or 2 at 0.1 nm; (b) SASE3 at 0.4 nm; (c) SASE3 at 1.6 nm.
Figure 5 illustrates the magnet configuration of a phase shifter in a perspective view. Only the magnet modules, without the iron yoke shown in Fig. 1, are sketched: a phase shifter is comprised of four magnet modules, each containing one full magnet, two poles, and two half magnets. In Fig. 5, 32 different positions are defined where shims might be placed. Symmetry requires that identical shims, i.e., shims of the same size placed on different positions, have corresponding effects.
FIG. 5. Illustration of the four modules of a phase shifter and the definition of the 32 different shim positions.
FIG. 6. Measurements of the total first vertical field integral as a function of x with a shim of 36 × 8 × 0.4 mm placed on O, M, R, and MR positions as indicated. (a) Measured data. (b) Data transformed back using the symmetry properties of Table III.
FIG. 8. Demonstration of the tuning of the vertical field integrals. Left: open symbols, status as found; full symbols, field integrals after tuning. Red: on-axis measurement; blue and black: measurements at x = ±0.5 mm. All curves fit inside the specs window given by Table I, which is indicated for comparison. Right: the gradient optimization is demonstrated by the slope of the x dependence at the 10.5 mm gap before and after tuning.
TABLE I. Combined specifications of the phase shifters for SASE1-3.
TABLE II. Combination of shims and resulting effects.
TABLE III. Magnet positions with O, M, R, and MR symmetry for horizontal and vertical field integrals and half and full magnets. | 5,644.8 | 2015-03-06T00:00:00.000 | [
"Physics"
] |
Protective Effects of Glucose-Related Protein 78 and 94 on Cisplatin-Mediated Ototoxicity
Cisplatin is a widely used chemotherapeutic drug for treating various solid tumors. Ototoxicity is a major dose-limiting side effect of cisplatin, which causes progressive and irreversible sensorineural hearing loss. Here, we examined the protective effects of glucose-related protein (GRP) 78 and 94, also identified as endoplasmic reticulum (ER) chaperone proteins, on cisplatin-induced ototoxicity. Treating murine auditory cells (HEI-OC1) with 25 μM cisplatin for 24 h increased cell death resulting from excessive intracellular reactive oxygen species (ROS) accumulation and caspase-involved apoptotic signaling pathway activation with subsequent DNA fragmentation. GRP78 and GRP94 expression was increased in cells treated with 3 nM thapsigargin or 0.1 μg/mL tunicamycin for 24 h, referred to as mild ER stress condition. This condition, prior to cisplatin exposure, attenuated cisplatin-induced ototoxicity. The involvement of GRP78 and GRP94 induction was demonstrated by the knockdown of GRP78 or GRP94 expression using small interfering RNAs, which abolished the protective effect of mild ER stress condition on cisplatin-induced cytotoxicity. These results indicated that GRP78 and GRP94 induction plays a protective role in remediating cisplatin-ototoxicity.
Introduction
Hearing loss, also known as hearing impairment, is generally classified into conductive and sensorineural hearing loss. The latter is caused by several risk factors, including acoustic trauma, aging, ototoxic drug use, autoimmune disease, infection, and genetic disorders. Hearing loss is commonly associated with the loss of auditory hair cells in the cochlea and is irreversible because the cochlea cannot regenerate damaged hair cells [1]. In particular, various commonly used drugs have ototoxic properties that damage the cochlea or the auditory nerve and vestibular system; this is referred to as drug-induced hearing loss (DIHL). The ototoxic side effects of drugs such as salicylates, aminoglycosides, and cisplatin are bilaterally symmetric or asymmetric, with one ear being affected later. DIHL may arise during or after the end of therapy and may occasionally be recoverable if the drug is immediately discontinued or if the initial damage is allowed to repair. However, further accumulation of ototoxic medication may lead to permanent destruction of the sensory hair cells and, concomitantly, permanent hearing loss [2].
Cell Culture
The HEI-OC1 cell line was kindly provided by Dr. Federico Kalinec (Dept. of Cell and Molecular Biology, House Ear Institute, Los Angeles, CA, USA). The HEI-OC1 cells were cultured in high-glucose Dulbecco's modified Eagle's medium (DMEM) with 10% fetal bovine serum (FBS) at 33 °C in a humidified 10% CO2 atmosphere.
Cytotoxicity Assay
Cell viability was evaluated using a colorimetric D-Plus™ CCK cell viability assay kit (Dongin LS, Seoul, Korea), according to the manufacturer's instructions. The cells were seeded on 96-well plates at a density of 4 × 10 3 cells/well and grown for 24 h under standard conditions. These cells were exposed to different concentrations of cisplatin, TG, and TM for 24 h. For inducing GRP expression, the cells were pretreated with 3 nM of TG or 0.1 µg/mL of TM for 24 h, followed by the treatment of 25 µM of cisplatin for 24 h. The amount of formazan dye generated was determined by measuring the absorbance at 450 nm using a microplate spectrophotometer (Molecular Devices Corp., Sunnyvale, CA, USA). The absorbance values were converted to percentages for comparison with untreated controls.
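The conversion from raw absorbance to percent viability is simple bookkeeping; a minimal sketch with made-up A450 readings and an assumed blank (medium-only) correction:

```python
import numpy as np

# A450 readings from replicate wells (illustrative values only)
blank = np.array([0.08, 0.09, 0.08])         # medium without cells
control = np.array([1.21, 1.18, 1.25])       # untreated cells
cisplatin = np.array([0.62, 0.58, 0.60])     # e.g. 25 uM cisplatin, 24 h

def percent_viability(sample, control, blank):
    """Blank-corrected absorbance of a sample as a percentage of control."""
    s = sample.mean() - blank.mean()
    c = control.mean() - blank.mean()
    return 100.0 * s / c

print(f"{percent_viability(cisplatin, control, blank):.1f}% of untreated control")
```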
Immunoblotting
The cells were washed with ice-cold PBS and lysed with RIPA buffer (Sigma-Aldrich) supplemented with complete protease inhibitor cocktail on ice for 30 min. The supernatants were collected by centrifugation at 13,000× g for 20 min, and protein concentrations were determined using a BCA Protein Assay kit (Thermo Fisher Scientific, Waltham, MA, USA). Total soluble proteins (10-30 µg) were separated on 12% sodium dodecyl sulfate polyacrylamide gel and transferred to nitrocellulose membranes (GE Healthcare Biosciences, Uppsala, Sweden). The membranes were blocked using 5% skim milk in TBS-T (10 mM Tris-HCl, pH 7.4, 100 mM NaCl, and 0.1% Tween 20) for 1 h at room temperature. The membranes were probed with their corresponding primary antibodies, followed by the appropriate HRP-conjugated secondary antibodies. Then, immunoreactive bands were detected using enhanced chemiluminescence assay technique (ECL; Dongin LS) and quantified using the ImageQuant LAS 500 biomolecular imager (GE Healthcare Biosciences).
Measurement of Intracellular ROS Production
Intracellular ROS levels were measured using a fluorescent dye, 5-(and-6)-chloromethyl-2′,7′-dichlorodihydrofluorescein diacetate acetyl ester (CM-H2DCFDA; Molecular Probes, Inc., Eugene, OR, USA). The cells grown on 96-well plates were pre-incubated with 3 nM of TG or 0.1 µg/mL of TM for 24 h and then treated with 25 µM of cisplatin for another 24 h. The cells were then washed twice with Hank's balanced salt solution (HBSS) and incubated with 5 µM CM-H2DCFDA for 20 min at 33 °C in the dark. After washing twice with HBSS, the samples were immediately read at 485 nm excitation and 535 nm emission using a PerkinElmer VICTOR 3 luminescence spectrometer (Perkin-Elmer, Waltham, MA, USA).
Detection of Apoptosis Using TUNEL Assay
Apoptosis was detected using both the LIVE/DEAD Viability/Cytotoxicity Kit (Molecular Probes, Inc., Eugene, OR, USA) and the In Situ Cell Death Detection Kit, TMR red (Roche Diagnostics, Indianapolis, IN, USA), according to the manufacturers' instructions, with slight modifications. The cells grown on glass coverslips in 6-well culture dishes were pretreated with 3 nM TG or 0.1 µg/mL TM for 24 h and further incubated with 25 µM of cisplatin for 24 h. Live cells were labeled with calcein AM, briefly washed with PBS, and fixed with 4% paraformaldehyde. Then, the cells were permeabilized with 0.1% Triton X-100 in 0.1% sodium citrate for 5 min and incubated with the TUNEL reaction mixture containing terminal deoxynucleotidyl transferase and tetramethyl-rhodamine-dUTP. The cells were examined using the appropriate filters of an Olympus IX71 fluorescence microscope: green fluorescence (ex/em ≈495/≈515 nm) for live cells and red fluorescence (ex/em ≈495/≈635 nm) for apoptotic cells. The percentage of TUNEL-positive cells was determined by counting ≈1000 cells selected from 3-4 randomly chosen fields of the coverslip.
Transfection with siRNA
The siRNAs against GRP78 and GRP94, and a scrambled oligonucleotide as a negative control, were obtained from Genolution Pharmaceuticals, Inc. (Seoul, Korea). The cDNA sequences of GRP78 (GenBank accession number NM_001163434.1) and GRP94 (GenBank accession number NM_011631.1) used to design the respective siRNAs were as follows: 5′-GAAUGAAUUGGAAAGCUAUUU-3′ for GRP78 and 5′-CUGGAAAUGAGGAGUUAACUU-3′ for GRP94. The scrambled siRNA was 5′-CCUCGUGCCGUUCCAUCAGGUAGUU-3′. The cells were seeded on 24-well culture plates and transiently transfected with each siRNA (60 nM) using G-fectin (Genolution Pharmaceuticals, Inc., Seoul, Korea), according to the manufacturer's instructions. Each transfection procedure was performed in quadruplicate. After 24 h, the transfection mixture was replaced with fresh culture medium and the cells were further incubated for 2 d. Each transfectant was treated with TG (3 nM) or TM (0.1 µg/mL) for 24 h, followed by incubation with 25 µM of cisplatin for another 24 h. Cell viability and ROS accumulation levels were evaluated as described above.
Statistical Analysis
Data were expressed as means ± standard error (SE) of three independent experiments. Differences between groups were evaluated using Student's t-test or one-way analysis of variance (ANOVA), as appropriate. A p value of <0.05 was considered statistically significant.
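Both tests are available in SciPy; a minimal sketch with placeholder measurements (three replicates per group, as in the experiments described here):

```python
import numpy as np
from scipy import stats

control = np.array([100.0, 98.4, 101.7])     # e.g. viability, % of control
treated = np.array([44.1, 45.6, 44.8])

t, p = stats.ttest_ind(control, treated)     # Student's t-test, two groups
print(f"t-test: t = {t:.2f}, p = {p:.4g}")

group_a = np.array([92.1, 93.0, 92.8])
group_b = np.array([89.5, 90.2, 90.9])
f, p = stats.f_oneway(control, group_a, group_b)   # one-way ANOVA, >=3 groups
print(f"ANOVA: F = {f:.2f}, p = {p:.4g}; significant if p < 0.05")
```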
Cisplatin-Induced Apoptosis in HEI-OC1 Cells
The HEI-OC1 cells were exposed to different concentrations of cisplatin (5-100 µM) for 24 h to determine the adequate cytotoxic cisplatin concentration, and cell viability was monitored using the CCK assay. The cisplatin treatment decreased cell viability in a dose-dependent manner, with a lag between 10 and 15 µM. At 25 µM cisplatin, cell viability was 44.8% compared with that of the untreated control (Figure 1A). As a result, 25 µM cisplatin was used in our subsequent studies, since this concentration and timepoint were within the range of the estimated half-maximal cytotoxic dose (IC50).
It is well established that cisplatin-induced cytotoxicity is closely associated with excessive generation of ROS and activation of apoptosis-related proteins [18,19]. Therefore, we initially measured the levels of cisplatin-induced intracellular ROS using a peroxide-sensitive fluorescent probe, CM-H2DCFDA. As shown in Figure 1B, DCF fluorescence intensity from the cisplatin-treated cells was five-fold higher than that of the untreated control. Next, we evaluated the changes in expression levels of proteins involved in apoptotic pathways to investigate whether cisplatin-induced cytotoxicity was associated with apoptosis. Immunoblot analyses showed that the expression levels of two mitochondrial proteins, namely Bcl-2 (anti-apoptotic protein) and Bax (pro-apoptotic protein), were contrasting in cisplatin-treated cells, that is, the ratio of Bcl-2/Bax was 1.0 in the untreated control versus 0.24 in cisplatin-treated cells. Moreover, catalytically activated forms (cleaved) of caspase-3 and caspase-7 had a six-fold increase in cisplatin-treated cells, with concomitant cleaved (inactivated) PARP fragment accumulation (Figure 1C). Taken together, these results indicated that excessive ROS accumulation and apoptosis contributed to cisplatin-mediated ototoxicity in HEI-OC1 cells. Protein bands were quantified using densitometry, and their abundances were expressed relative to β-actin band density. The ratio of each protein to β-actin is presented as a fold change of that of the untreated control. Values are expressed as means ± SE of three independent experiments. * p < 0.05, ** p < 0.01, *** p < 0.001; compared with the untreated control.
Effects of ER Stress Inducers on GRP78 and GRP94 Expressions in HEI-OC1 Cells
The induction of GRP78 and GRP94 expression during ER stress is reported to function in maintaining ER homeostasis, assisting in proper protein folding, and degrading misfolded proteins through chaperone formation [20,21]. The involvement of GRPs in cell survival prompted us to examine the protective roles of GRP78 and GRP94 in cisplatin-mediated ototoxicity. Cell viability was evaluated in TG- or TM-treated cells at various concentrations. The 24-h exposure revealed that the cytotoxicity of both inducers increased dose-dependently. At 3 or 5 nM TG, cell viability decreased to 92.6% or 90.1%, whereas at 0.05 or 0.1 µg/mL TM, cell viability was 91.2% or 89.2% (Figure 2A,B), indicating that the cytotoxic effects of TG and TM are relatively mild at these concentrations. Treatment with 3 or 5 nM of TG induced significant increases in GRP78 and GRP94 expression, that is, three-fold and six-fold for GRP78 and GRP94, respectively, at both concentrations. Treatment with 0.05 µg/mL TM resulted in a two-and-a-half-fold increase in GRP78 expression, but not in that of GRP94; the expression of both proteins was significantly increased at 0.1 µg/mL (Figure 2C). Therefore, the aforementioned concentrations of TG and TM (3 nM and 0.1 µg/mL, respectively) were used to examine the protective effects of GRP78 and GRP94 on cisplatin-induced ototoxicity.
Protection of GRP78 and GRP94 Induction from Cisplatin-Mediated Ototoxicity
To further examine whether the upregulation of GRP78 and GRP94 attenuated cisplatin-induced cytotoxicity, these proteins were induced by pre-incubating HEI-OC1 cells with 3 nM of TG or 0.1 µg/mL of TM for 24 h; the cells were then exposed to 25 µM of cisplatin for another 24 h. As shown in Figure 3A, the CCK assay showed that pretreatment with TM or TG increased cell viability by 29.4% or 27.8% over that of cisplatin alone. We then evaluated the changes in cisplatin-induced intracellular ROS generation in cells pretreated with TM or TG. Cisplatin-triggered ROS accumulation was decreased 2.9- or 2.2-fold, respectively, in cells pretreated with TM or TG, as determined by DCF fluorescence intensity analysis (Figure 3B). When the cells were treated with TM or TG alone for 24 h, ROS levels were slightly increased, but the values were significantly lower than those of cisplatin treatment alone (data not shown). This result indicated that GRP induction attenuated cisplatin-triggered intracellular ROS accumulation. Next, we investigated changes in the levels of proteins involved in apoptotic pathways, using immunoblot analysis (Figure 3C). Cisplatin treatment by itself did not change GRP78 or GRP94 expression. The ratio of Bcl-2/Bax was 0.13 in cisplatin-treated cells, whereas this ratio was elevated in cells pretreated with TM or TG (0.6 or 0.67). In addition, the augmented activation of caspase-3 and caspase-7, as well as cisplatin-induced PARP inactivation, dramatically declined in cells pretreated with TM or TG. Finally, the protective effect of GRP overexpression on cisplatin-induced apoptosis was confirmed through calcein AM staining (green) of viable cells, followed by the detection of DNA fragmentation using the TUNEL assay (red), which allowed us to calculate the percentage of apoptotic cells among viable cells. As shown in Figure 4, the percentage of TUNEL-positive cells was 35% in cisplatin-treated cells, whereas pretreatment with TM or TG dramatically reduced this percentage to 8% or 13%, respectively. Taken together, these results indicate that GRP pre-induction inhibits cisplatin-mediated apoptotic events in HEI-OC1 cells, such as oxidative stress, caspase-dependent pathway activation, and dysregulation of apoptosis-regulating mitochondrial proteins.
Effect of GRP78 or GRP94 Knockdown (KD) on Cisplatin-Mediated Ototoxicity
To further validate the protective roles of GRP78 and GRP94 against cisplatin-induced ototoxicity, the pre-induction of GRP78 and GRP94 in TM- or TG-treated cells was inhibited through small interfering (si)RNA transfection, and the changes in cisplatin-mediated cytotoxicity and ROS accumulation were then evaluated. After 72 h of transfection, immunoblot analysis showed that GRP78 and GRP94 expression levels in the respective KD transfectants were markedly decreased, to 0.4 times the level in the scrambled siRNA transfectant. The slight reduction of GRP94 or GRP78 expression observed in GRP78 or GRP94 KD cells, respectively, was not statistically significant (Figure 5A). Cisplatin-induced cytotoxicity was not changed in either the GRP78 or the GRP94 KD transfectant, whereas the rescue effect of the TG or TM pretreatment on cell viability was markedly decreased, by 30%, compared with that in the scrambled siRNA transfectant (Figure 5B). Concomitantly, in KD cells each pretreatment further increased cisplatin-triggered ROS accumulation (Figure 5C). These results demonstrated that GRP overexpression plays a crucial role in attenuating cisplatin-mediated ototoxicity.
Figure 5. Cisplatin-mediated cytotoxicity and intracellular ROS accumulation in GRP78 or GRP94 KD cells. Cells were transfected with GRP78, GRP94, or scrambled siRNAs and then treated with cisplatin as described in Section 2. (A) GRP78 and GRP94 expression levels after the 72-h transfection. Protein bands were quantified densitometrically and normalized to the density of the β-actin band. The ratio of GRP78 or GRP94 to β-actin in each group is presented as its fold change relative to the scrambled siRNA transfectant. * p < 0.05, compared with the scrambled siRNA transfectant. After the 48-h incubation, cells were treated with TM or TG, and then cisplatin, as previously described. (B) Cell viability was determined through CCK assay. The graph represents the relative viability percentage compared with the untreated control. (C) The levels of ROS accumulation were determined through DCF fluorescence intensity spectrofluorometry. The graph represents the relative ROS accumulation fold compared with untreated controls. Values are expressed as means ± SE of three independent experiments. p < 0.05: *, compared with the untreated control; #, cisplatin-only versus TM plus cisplatin or TG plus cisplatin.
Discussion
Excessive free radical formation in the cochlea caused by aging, noise exposure, and ototoxic compounds results in sensory hair cell injury, which subsequently leads to hearing loss. Potential free radical generators in the ear include mitochondria, enzymatic reactions, NOX3, and increased intracellular calcium concentration that leads to overproduction of neurotransmitters, such as nitric oxide (NO) and glutamate [2,22,23]. In this respect, maintaining redox homeostasis is crucial in protecting the cochlea and central auditory system against oxidative stress-mediated acoustic trauma. In the present study, we found that pre-induction of GRP78 and GRP94 attenuated the cisplatin-induced ROS accumulation, which protected the HEI-OC1 cells from oxidative injury.
Cisplatin is an effective, widely used anticancer drug; however, its major side effect is ototoxicity with subsequent sensorineural hearing loss after high-dose treatment. Cisplatin ototoxicity is known to be associated with at least two mechanisms, DNA adduct formation and ROS accumulation, in both the cochlea and the vestibular system, leading to the death of sensory cells through apoptosis or necrosis [24]. For example, cisplatin was found to induce apoptosis in HEI-OC1 cells and in neonatal rat organ of Corti explants, which was mediated by ROS generation and lipid peroxidation [25]. Intraperitoneal cisplatin evoked a hearing threshold shift and an intrinsic apoptotic pathway within rat cochleae, which involved the activation of caspase-3 and caspase-7 and the modulation of two mitochondrial proteins (increased Bax and decreased Bcl-2 levels) [26]. It has also been reported that cisplatin ototoxicity in HEI-OC1 cells is mainly associated with the mitochondrial apoptotic pathway through the activation of the ROS/JNK signaling cascade [27]. Consistent with these findings, the present study showed that intracellular ROS accumulation and intrinsic apoptotic pathways mainly contributed to cisplatin-induced cell death in HEI-OC1 cells (Figure 1). Activation of caspase-3 and caspase-7, inactivation of PARP, and altered expression of Bcl-2/Bax were found to be associated with increased levels of DNA fragmentation (Figure 4).
GRP78 and GRP94 induction ensures proper protein folding in the ER, thereby protecting cells from ER dysfunction caused by nutrient deprivation, chemical toxicity, changes in calcium mobilization, oxidative stress, or glycosylation disturbances [28,29]. These chaperones were induced here by treating the cells with a specific inhibitor of the ER Ca2+-ATPase (TG) or an N-linked glycosylation inhibitor (TM), which disrupts ER calcium homeostasis or prevents post-translational protein maturation, respectively. The protective mechanisms of GRPs involve suppressing intracellular ROS accumulation and stabilizing mitochondrial function [11,30]. In the present study, dose-dependent cytotoxicity was observed in HEI-OC1 cells exposed to different concentrations of TG or TM for 24 h. At 3 nM of TG or 0.1 µg/mL of TM, cytotoxicity was relatively low and GRP78 and GRP94 expression levels were significantly increased (Figure 2). Moreover, there was no obvious change in intracellular ROS accumulation or apoptosis at these concentrations (data not shown). Similar ranges of TG or TM concentration and exposure time were used to induce ER stress proteins without serious toxicity in various cell lines [13,14,31]. However, prolonged exposure resulted in decreased GRP78 and GRP94 levels, leading to a consequent loss of cell viability. Taken together, these findings suggest that GRP78 and GRP94 may be induced in cells prior to the development of more severe cytotoxicity.
Exposure to ER stress results either in the activation of protective ER stress responses or, when the stress exceeds the capacity of the unfolded protein response (UPR) system, in ER-associated apoptotic pathways. The protective response restores cellular homeostasis and triggers adaptive reactions that strengthen protection against a later, more injurious stress. The beneficial effect of mild ER stress, described as ER stress preconditioning, has been reported in the liver and brain of TM-injected rats, which were protected from later hepatic ischemia/reperfusion injury and from lipopolysaccharide-induced neuroinflammation and memory impairment, respectively [32,33]. Additionally, ER stress preconditioning in cultured cells pretreated with TG alleviated toxicant-mediated cell damage through the upregulation of ER stress-related proteins, including GRP78 and GRP94 [14,34]. In the present study, pre-incubation of HEI-OC1 cells with TG or TM prior to adding cisplatin induced GRP78 and GRP94 expression, attenuated intracellular ROS accumulation, and inhibited the caspase-dependent apoptotic pathway, resulting in increased cell viability (Figure 3), which correlated with a reduction in TUNEL-positive cells (Figure 4). These results define a novel mechanism whereby mild ER stress may help auditory cells defend against the ototoxic side effect of cisplatin, as it can alleviate cell injury, including excessive ROS accumulation and apoptosis.
GRP78 and GRP94 induction in ER stress-preconditioned cells plays cytoprotective roles under various cytotoxic conditions. For example, increased GRP78 expression during the ER stress response attenuated H2O2-induced renal epithelial cell injury by inhibiting the increase of intracellular Ca2+ concentration and the activation of the ERK1/2 signaling pathway [35]. Tolerance to various cytotoxins was provided by GRP78 and GRP94 overexpression in several cell lines [31]. Furthermore, the co-downregulation of GRP78 and GRP94 expression in prostate cancer cells by their specific siRNAs suppressed cell migration and promoted caspase-9-dependent apoptosis [36]. In the present study, transient transfection of HEI-OC1 cells with siRNA targeting GRP78 or GRP94 abolished the induction of the respective protein during ER stress preconditioning and failed to reduce cisplatin-mediated ROS accumulation, thus sensitizing cells to cisplatin-induced cytotoxicity (Figure 5). This finding indicated that GRP78 and GRP94 induction is integral for ER function to promote a protective mechanism against cisplatin-triggered ototoxicity. This is supported by recent findings that the decreased expression of ER stress-related proteins, including GRP78, in the cochleae of aged mice was associated with age-related hearing loss [16], and that intense noise exposure upregulated GRP78 expression in hair cells, the lateral wall, and spiral ganglion cells of guinea pigs, thereby protecting cochlear cells from noise-induced injury [17]. It has also been reported that cisplatin binds to GRP78 and GRP94 from cochlear and kidney cell lysates, suggesting that this interaction may attenuate cisplatin ototoxicity [37]. It will be of interest to examine the signaling pathways of the ER stress sensors, including inositol-requiring enzyme 1 (IRE1), PKR-like endoplasmic reticulum kinase (PERK), and activating transcription factor 6 (ATF6). This may help to understand the protective mechanism of ER stress preconditioning against the ototoxicity of cisplatin.
Conclusions
In summary, we showed that TG or TM pretreatment before cisplatin exposure attenuated cisplatin ototoxicity in auditory hair cells. The protective effects of these ER stress inducers were achieved through increased GRP78 and GRP94 expression, leading to the inhibition of the intracellular ROS accumulation and intrinsic apoptotic signaling induced by cisplatin. Our findings add to the knowledge of the beneficial effects of ER stress preconditioning on cisplatin-induced ototoxicity and also provide new insight into designing approaches to prevent or treat environment-related hearing loss. | 6,286 | 2020-08-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
MAINTAINING ORDER INFLUENCES ANCIENT INTERPRETATIONS OF ANIMAL VIOLENCE
Order occupied a central role in how the ancients understood the world. Violence against animals in the ancient world was acceptable depending, at least in part, on how they were classified into the order-disorder spectrum. Animals that were seen as threatening to order were classified as more wild and disorderly, even if the animals were seen as able to form contracts. Animals employed in some manner beneficial to humans, though, were seen as more orderly. Ultimately, if violence was necessary for the preservation of order, especially if the goal was to keep animals in a state of usefulness, it was employed. Lastly, arguments that promote kindness towards animals highlight that this kindness is, in some indirect way, to preserve order.
emphasis on order over chaos, with hierarchical strata enforced by laws, economics, and violence. Did this esteem for order, structure, and organization, then, influence how ancient sources viewed violence against animals?
The evidence that follows suggests that violence against animals in the ancient world was acceptable, though depending, at least in part, on how they were classified into the order-disorder spectrum. Animals that were seen as threatening to order -for example, in spreading diseases -were classified as more wild and disorderly. Animals employed in some manner beneficial to humans, like husbandry and arena games, though, were seen as more orderly. Viewing animals as rational did not seem to impact whether animals were seen as orderly or wild: even when viewed as rational and able to form contracts, ancient sources still suggest animals be employed in some kind of useful role to preserve order.
Ultimately, if violence was necessary for the preservation of order, especially if the goal was to keep animals in a state of usefulness, it was employed. Lastly, arguments that promote kindness towards animals highlight that this kindness is, in some indirect way, to preserve order. Only in sources that don't develop a context of preserving order -for example, in non-political treatises -do authors begin to consider violence against animals as unwarranted for direct, inherent moral reasons instead of an indirect argument of keeping order in the world.
Animals, despite being nonhuman, were commonly fit into the order/disorder classification in the ancient world. Greeks gave human names to hunting animals and pet dogs, both of which were very much a part of social and family structure, and even buried dogs beside their masters in cemeteries (Lonsdale 149-150). In short, these animals could fit nicely into a well-ordered Greek world. But dogs were also viewed as diseased and rabid, which aroused the Greeks' anxiety (Lonsdale 151), presumably because contagious diseases could throw whole cities into chaos. Even some pet dogs, in the right circumstances, could be feared. As Lonsdale says, "The fear that the dog will turn on his master, in essence become his successor, comes through strongly in stories like Priam's apocalyptic vision of the fall of Troy in the twenty-second book of the Iliad, where his table-dogs tear out his hair and rip away his genitals" (152). This fear seems to come in part from a threat to order, the overturning of the master-animal relationship. Porphyry cites another argument of a threat posed by animals: in their natural tendency to over-breed, animals could "overrun the streets" if left unchecked (Newmyer 108). Outside homes and towns, there are assertions that hunting is necessary because it provides humans "expressions of triumph at the removal of beasts intent upon obliterating the human race" (Newmyer 87, emphasis added). So, what could be done about these threats to order posed by animals, both tame and wild?
Aristotle claims that human beings wage a "just war against wild nature" (Politics 1256b23-26, via Newmyer 87), evidencing a longstanding tradition of using violence to preserve order against the wild and chaotic. But, instead of simply obliterating threatening animals, it seems it was most expedient to organize a system in which animals were of some service to humans. As an analogy, Zeus [...]; branding irons and chains were often used (Newmyer 108). All of this suggests a view that animal nature is wild and man has a role in taming it to preserve order.
Despite the wild nature often attributed to animals, some sources, like Aristotle (History of Animals 615b23-24), Aelian (Nature of Animals III 23), and Plutarch (On the Cleverness of Animals 962), saw evidence for a degree of reason in animals, like birds, who follow their leaders (all via Newmyer 85). Does whether animals are viewed as rational or irrational, then, influence where they fit on the order/wild spectrum? For example, one might expect rational animals to be more suited to order. But ultimately, this appears to not be the case. What matters is whether the animals were threatening to whatever order existed, not whether the animal was rational. In fact, rational animals could even be seen as more threatening. Thus, Democritus argues that animals may act contrary to order in a manner that suggests an intention to do so. Consequently, humans who commit violence against such animals are justified in doing so (Newmyer 83). Violence could also be used to preserve the guardianship humans offer in return for animals' services and products, much like a covenant or contract between rational beings, as suggested by Lucretius (Newmyer 29). This is similar to how a slave owner might use violence to keep order among his slaves, for as Plutarch says, "Perfect reason, after all, is scarcely to be isolated even in human beings" and in them it is the result of much care and training (Newmyer 17). So, even when considered rational, animals could still be treated with violence to prevent any discord or chaos, which suggests that concerns about maintaining order in the world largely determined attitudes about violence toward animals.
Violence, however, only helps in certain circumstances to preserve order.
Specifically, when animals are disorderly, violence is acceptable to preserve order.
But, as Newmyer summarizes Pythagoras's view: how can sheep be subject to violence when they are "a gentle flock born to dwell among humans, bearing their nectar in full udders, creatures that provide us their wool for clothing" (87). Ovid, too, denies that humans can justly hurt tame animals that are "our partners in labor" (Newmyer 99). Because violence against these animals does not help order in any way, it is unnecessary and even damaging to order because it hurts animals that are our partners, that already help us. Thus, these sources use the same argument, that is, preserving order, to identify circumstances in which violence was unwarranted.
The reason violence against tame animals is unacceptable, to these sources, though, is not respect for animals so much as a belief that unnecessary human violence towards animals promotes violence among humans, and this ultimately threatens order. Pythagoras, for example, argues that "humans are enjoined to refrain from cruelty to animals because kindness to them promotes kindness to human beings" (Newmyer 114). Likewise, that cruelty towards animals promotes cruelty to human beings, particularly by desensitizing the perpetrator, is explicitly expressed by Plutarch in his reference to the escalating cruelty of the Athenian Tyrants (Newmyer 88). Plutarch even goes so far as to portray cruelty against ordered and non-threatening animals as wild and chaotic itself (Newmyer 108).
These arguments are not about the animals' suffering, but instead the effects of increasing violence in human social relations. This further supports the claim that considerations about order and disorder are central in how sources view violence against animals.
The selection of works available, however, is a limiting factor in this analysis. Many of the extant works from the ancient world are concerned with politics and are, in general, anthropocentric. This can bias our interpretation. Aristotle, for example, denies animals intellectual and moral capacities in his more anthropocentric works, such as Politics and Nicomachean Ethics, that he attributes to them in his biological works (Newmyer 8). And the focus of a piece, and how concerned it is with order, could influence the author's interpretation of violence against animals.
So, for example, Plutarch, in his ethical work On the Eating of Flesh, began to imply that the wanton treatment of other species is abominable (Newmyer 78). This implies that threatening animals still needed to be handled with violence, but using violence to tame useful animals, and even using animals in general, was morally unsound. As Newmyer says, "Plutarch maintains that non-human animals love their offspring as tenderly as do human parents," and so by treating them as trained, furry tools, humans violate animals' own sense of justice (16). Overall, without a context of preserving order, sources begin to interpret violence against animals differently. When the context no longer considers ultimately how order can be preserved, sources begin implicitly to describe animals as beings endowed with a sense of justice and thus deserving of it. This highlights an exception that emphasizes the rule of preserving order's influence on ancient interpretations of violence against animals.
In sum, then, the goal of preserving order guided the interpretation of violence against animals in the ancient world. As a means of taming the "wild nature" (Aristotle via Newmyer 87) of animals to secure the political, economic, or social systems of the time -in other words, to secure order -violence was acceptable.
In describing the threats posed by animals, sources often specifically recommend violence as a means of preventing discord, like hunting to curtail animal populations "intent on destroying the human race" (Newmyer 87). What was most expedient, though, was the use of animals to contribute to order, especially the economic and social systems in roles like husbandry and religious functions.
Violence, then, was used to train and keep animals in these orderly roles. Even when sources considered animals rational and exchanging these services for guardianship, as in a contract, they still justified violence against animals to secure their functions in the economic and social system. Further, describing circumstances where violence against animals is unwarranted, ancient sources offer the argument that violence against already tame animals disrupted order in some way: either by being unnecessary damage to already helpful animals or by leading humans to become more apt to commit violence among other humans.
Only in works that have less of an emphasis on order and disorder, for example Plutarch's On the Eating of Flesh, do interpretations of violence against animals depart from the otherwise consistent preservation-of-order arguments. All of this points to the preservation of order as an important influence on the way ancient sources both justified and critiqued violence against animals.
WORKS CITED
LONSDALE, STEVEN H., Attitudes Towards Animals in Ancient Greece, in Greece and Rome. | 2,507.4 | 2015-07-01T00:00:00.000 | [
"History",
"Philosophy"
] |
Brain microstructure by multi-modal MRI: Is the whole greater than the sum of its parts?
The MRI signal is dependent upon a number of sub-voxel properties of tissue, which makes it potentially able to detect changes occurring at a scale much smaller than the image resolution. This "microstructural imaging" has become one of the main branches of quantitative MRI. Despite the exciting promise of unique insight beyond the resolution of the acquired images, its widespread application is limited by the relatively modest ability of each microstructural imaging technique to distinguish between differing microscopic substrates. This is mainly due to the fact that MRI provides a very indirect measure of the tissue properties in which we are interested. A strategy to overcome this limitation lies in the combination of more than one technique, to exploit the relative contributions of differing physiological and pathological substrates to selected MRI contrasts. This forms the basis of multi-modal MRI, a broad concept that refers to many different ways of effectively combining information from more than one MRI contrast. This paper will review a range of methods that have been proposed to maximise the output of this combination, primarily falling into one of two approaches. The first one relies on data-driven methods, exploiting multivariate analysis tools able to capture overlapping and complementary information. The second approach, which we call "model-driven", aims at combining parameters extracted by existing biophysical or signal models to obtain new parameters, which are believed to be more accurate or more specific than the original ones. This paper will attempt to provide an overview of the advantages and limitations of these two philosophies.
Introduction
Magnetic resonance imaging (MRI) has made an unprecedented contribution to our understanding of the brain, thanks to its ability to take extremely detailed pictures of this organ non-invasively. As our understanding of the MR signal has increased, and hardware developments have allowed us to push the boundaries further and further, a range of image contrasts, each reflecting different properties of the tissue, has become available.
This has prompted a shift from qualitative to quantitative MRI, which represented a true revolution in the application of MRI for research (Tofts, 2003a), particularly with the development of techniques able to detect changes occurring at the microstructural level.
Most of these techniques have proven extremely sensitive to tissue abnormalities, albeit at the price of poor specificity. The MRI signal is a very indirect measure of the tissue properties we are interested in, and despite the influence that factors such as myelin content and axonal packing have on the contrast, the variety of factors that contribute to the overall signal prevents a one-to-one association between MRI biomarkers and biological substrates. In order to overcome this intrinsic limitation of MRI, increasingly sophisticated models of signal behaviour have been developed in an attempt to link the MRI signal to specific tissue features (such as axon diameter or permeability), e.g., (Alexander et al., 2010; Coatleven et al., 2014; Kaden and Alexander, 2013). However, these applications remain associated with prohibitively long scan times and poor reproducibility.
How can we access non-invasive imaging biomarkers with improved specificity?
The answer lies in the versatility of MRI: by combining several MRI contrasts we can exploit the relative contributions of differing pathological substrates to selected MRI contrasts and substantially increase the sensitivity to specific substrates. A way of picturing this is by imagining that many tissue components have been "encoded" via different filters in each MRI technique. Multi-modal MRI is thus the way to decode them.
Multi-modal MRI is a broad concept that refers to any attempt to combine information coming from more than one MRI contrast. The possible approaches thus span from simply measuring several MRI parameters in the same individuals, to developing joint models, to using complex computational approaches to derive new measures.
In this paper we will review some examples of multi-modal imaging with the aim of identifying the advantages of this approach while highlighting at the same time the challenges and pitfalls associated with it.
The paper is organised as follows: first we will review the main components of brain tissue that we may want to characterise, and the MRI techniques that so far hold the most promise for achieving this goal. Next we will discuss the evidence that supports the complementarity of some of these techniques. Finally, we will review the most popular methods for the acquisition and analysis of multi-modal data.
What are we trying to measure?
The aim of microstructural imaging is to quantify the properties of tissue components, such as myelin, axons, dendrites, and glia, and to characterise pathological features such as demyelination, inflammation, and axonal loss. In other words, the ultimate goal of microstructural imaging is to provide non-invasive histology. While the same principles apply to the study of white and grey matter, this paper will focus primarily on the former. This is because most of the work done to date concerns the white matter, and models of the MRI signal in white matter are usually less ambiguous than those of the grey matter.
The white matter of the human brain is composed of tightly packed myelinated and non-myelinated axons and glial cells. The glial cells include oligodendrocytes, astrocytes, microglia, and oligodendrocyte progenitor cells (Walhovd et al., 2014). Pathology in the white matter thus consists mainly of demyelination, axonal degeneration and loss, and gliosis. In addition, iron, which is stored in the ferritin protein, tends to accumulate with age and with neurodegenerative processes, although its concentration is higher in the grey matter (particularly in the basal ganglia) than in the white matter (Connor et al., 1990; Hallgren and Sourander, 1958). Each of these abnormalities, alone or in combination, induces similar changes in MRI biomarkers, complicating their interpretation. Most MRI parameters tend to share some degree of variance, and disentangling these contributions is essential to understanding the pathophysiology of neurological disorders and therefore to developing treatments. In addition to disease, measuring white matter changes is also relevant to understanding the mechanisms underpinning the plastic changes that occur in the brain as a consequence of maturation, ageing, training and lifestyle (Kleim et al., 2002; Scholz et al., 2009; Zatorre et al., 2012).
How are we trying to measure it?
Here we will provide a brief overview of some basic concepts that may be needed to understand the following sections. While an extensive review of each technique is beyond the scope of this paper, interested readers can refer to the references provided below for more details. This list of techniques is not meant to be exhaustive: other MRI methods that offer insight into microstructure exist. Here we have included the most popular ones and also those that so far have been most consistently combined in a multi-modal fashion.
Diffusion MRI
The contrast in diffusion MRI (dMRI) arises from the interaction between the random motion of water molecules and the obstacles (membranes, organelles, cells, etc.) they encounter within tissue (LeBihan, 1990). If such obstacles are not distributed uniformly, but rather form ordered "barriers" to diffusion, then diffusion becomes anisotropic. Diffusion tensor imaging (DTI) can model diffusion anisotropy and allows a number of scalar indices to be derived that can be used to characterise tissue microstructure (Basser et al., 1994). Fractional anisotropy (FA) has become one of the most popular MRI-derived indices in clinical studies, and has been applied to the study of many neurological and psychiatric disorders (Pierpaoli et al., 1996). However, changes to anisotropy are difficult to interpret, because different effects, such as myelin loss and axonal degeneration, can result in the same FA change (Beaulieu, 2009). A more comprehensive picture can be gained by looking at the eigenvalue changes at the same time, or at the so-called axial (AD) and radial (RD) diffusivities (Song et al., 2005). However, care must be taken, as axial and radial diffusivity may be meaningless in regions of crossing fibres (Wheeler-Kingshott and Cercignani, 2009); this is a consequence of the diffusion tensor being inadequate to describe diffusion in such a system. dMRI has evolved over the years to account for these problems, yielding more complex models that can account for more than one fiber bundle per voxel, and even for multiple compartments. Many of these advanced models (e.g., diffusion kurtosis imaging (Jensen et al., 2005)) increase sensitivity to microstructural changes but still do not provide specific information about the nature of the detected changes. More direct measurements of specific features, such as axonal density or radius, can in principle be achieved with dMRI (Alexander et al., 2010), but they require prohibitively long acquisition times and specialised equipment.
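To make these indices concrete, the minimal sketch below (Python/NumPy; the function name and example eigenvalues are illustrative, not from any specific study) derives FA, mean diffusivity, AD and RD from the tensor eigenvalues, which also clarifies why very different pathological changes can produce the same FA value.

import numpy as np

def dti_scalars(evals):
    """Compute DTI scalar indices from the three eigenvalues of the
    diffusion tensor, assumed sorted so that l1 >= l2 >= l3."""
    l1, l2, l3 = evals[..., 0], evals[..., 1], evals[..., 2]
    md = (l1 + l2 + l3) / 3.0        # mean diffusivity
    ad = l1                          # axial diffusivity
    rd = (l2 + l3) / 2.0             # radial diffusivity
    # Fractional anisotropy (Basser et al., 1994)
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = np.sqrt(1.5 * num / np.maximum(den, 1e-12))
    return fa, md, ad, rd

# Example: plausible healthy white-matter eigenvalues in mm^2/s
fa, md, ad, rd = dti_scalars(np.array([1.7e-3, 0.4e-3, 0.3e-3]))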
Magnetization transfer imaging
Magnetization transfer (MT) is based on the exchange of magnetization occurring between groups of spins characterised by different molecular environments (Wolff and Balaban, 1989), namely those in free water and those bound to lipids and proteins. It is generally accepted that myelin dominates this process in the white matter. While MT imaging has been available for three to four decades now, the traditional approach to quantifying it is based on the so-called MT ratio (MTR) (Dousset et al., 1992). MTR is the percentage difference between two images, one acquired with off-resonance saturation (to which only macromolecular protons are sensitive) and one without. By increasing the number of acquisitions to three, it becomes possible to separate the contributions of MT and T1, and therefore to reduce the impact of other factors, including the T1-shortening effects of iron, on image contrast (Helms et al., 2008). Analytical models of the MT-weighted signal exist (Henkelman et al., 1993), in which each pool is characterised by its longitudinal relaxation rate (RA and RB), transverse relaxation time (T2A and T2B), and spin density (M0A and M0B); the exchange rate constant between the pools is R. Assuming that myelin is the main contributor to MT in white matter, the ratio M0B/M0A, known as the pool size ratio (PSR or F), is believed to reflect myelin content. Some authors prefer to use the bound pool fraction (BPF), given by M0B/(M0B + M0A). Several animal studies support this assumption (Ou et al., 2009b; Turati et al., 2015). MT is also known to be sensitive to inflammation and pH (Henkelman et al., 2001; Louie et al., 2009), and consistently it was recently suggested that MT parameters other than F might be sensitive to activated microglia or astrocytosis (Harrison et al., 2015) and metabolism (Giulietti et al., 2012). One of the limitations of MT is that the quantification of macromolecular protons is typically relative to the amount of liquid protons. This makes it impossible to distinguish between cases of increased water (e.g., oedema) and decreased lipid-protein content (e.g., demyelination) (Stanisz et al., 2004). In addition, macromolecules other than myelin might affect MT measurements.

T1, T2, T2*, and T2′ relaxometry

T1 and T2 are known to be extremely sensitive to white matter microstructure (Kucharczyk et al., 1994). The properties of myelin, in particular, cause the relaxation times of the water trapped within its layers to be much shorter than those of intra- and extracellular water (MacKay et al., 1994). This can be exploited in multi-component relaxometry, also known as multi-exponential T2 (MET2). The technique involves sampling the signal at several echo times and estimating the spectrum of T2 values (MacKay et al., 2006), with each peak corresponding to a different water component. In a departure from the original technique, the multi-component driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) approach (Deoni et al., 2003) allows whole-brain myelin water fraction (MWF) maps to be obtained in under 10 min. One of the differences between mcDESPOT and MET2 is in the assumptions made about water exchange. While MET2 methods typically assume there is no exchange, this assumption has been questioned, particularly in the presence of myelin thinning. Neglecting this term might lead to underestimating the myelin water fraction, and should be considered a limitation of these methods.
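As a concrete illustration of MET2 spectrum estimation, the following minimal sketch (all acquisition numbers assumed; regularisation, which real implementations require, is omitted) fits a T2 spectrum to a synthetic multi-echo decay via non-negative least squares and extracts a myelin water fraction.

import numpy as np
from scipy.optimize import nnls

# Illustrative multi-echo acquisition: 32 echoes, 10 ms spacing
te = np.arange(1, 33) * 10e-3                              # echo times (s)
t2_grid = np.logspace(np.log10(0.01), np.log10(2.0), 60)   # candidate T2s (s)

# Dictionary of mono-exponential decays, one column per candidate T2
A = np.exp(-te[:, None] / t2_grid[None, :])

# Synthetic two-pool signal: 15% "myelin water" (T2 = 20 ms), 85% IE water (T2 = 80 ms)
signal = 0.15 * np.exp(-te / 0.020) + 0.85 * np.exp(-te / 0.080)

# Non-negative least squares yields the T2 spectrum
spectrum, _ = nnls(A, signal)

# Myelin water fraction: spectral weight below ~40 ms over the total
mwf = spectrum[t2_grid < 0.040].sum() / spectrum.sum()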
The signal in gradient echo sequences decays with a faster time constant than T2, due to the presence of local external magnetic field inhomogeneities causing an additional dephasing of the magnetization. This shorter decay time is known as T2*. T2* is related to any factor causing local susceptibility changes such as the presence of iron (Haacke et al., 2005).
R2′ (= 1/T2′), defined as the difference between 1/T2* and 1/T2, should provide a more direct measure of field inhomogeneities and thus of iron (Ordidge et al., 1994). However, this parameter tends to be small in magnitude and is often affected by noise, which limits its precision and its use.
T2* contrast is also exploited in techniques such as blood-oxygenation-level-dependent (BOLD) contrast, arterial spin labelling, and susceptibility-weighted imaging.
Quantitative susceptibility mapping
Quantitative susceptibility mapping (QSM) is becoming increasingly popular. Magnetic susceptibility is related to iron content, myelin properties, fiber orientation and blood flow (Haacke et al., 2005), and thus QSM has great potential in the context of microstructural imaging. This technique aims at measuring the local magnetic susceptibility quantitatively and independently of the sample orientation. The data are typically acquired using flow-compensated gradient-echo images, with a set of parameters that depends on the tissue of interest (Haacke et al., 2015). The relevant information is contained in the phase of the MRI signal, and requires careful processing in order to be extracted. The processing includes phase unwrapping (i.e., removing the discontinuities caused by the fact that the phase is defined between −π and π) and removal of background fields caused by bulk field inhomogeneities. Once the local magnetic field is estimated, estimating the magnetic susceptibility requires the inversion of an ill-posed problem, and appropriate reconstruction methods have become available only recently (e.g., Liu et al., 2009; Schweser et al., 2013; Shmueli et al., 2009). Thanks to these developments, and to the advent of ultra-high magnetic fields for human MRI, QSM has become feasible in vivo, and it has been applied to the study of iron distribution, demyelination and oxygen metabolism (Wang and Liu, 2015).
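To make the final inversion step concrete, here is a minimal sketch of truncated k-space division (TKD), one of the simplest reconstruction approaches, in the spirit of Shmueli et al. (2009). It assumes the local field map has already been unwrapped and background-corrected; the function name, threshold value and parameterisation are illustrative.

import numpy as np

def tkd_qsm(local_field, voxel_size, b0_dir=(0, 0, 1), thresh=0.2):
    """Truncated k-space division dipole inversion of a local field map."""
    shape = local_field.shape
    # k-space coordinates (cycles per mm) for each axis
    k = np.meshgrid(*[np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)],
                    indexing="ij")
    k2 = k[0] ** 2 + k[1] ** 2 + k[2] ** 2
    kz = k[0] * b0_dir[0] + k[1] * b0_dir[1] + k[2] * b0_dir[2]
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - (kz ** 2) / k2      # unit dipole kernel in k-space
    D[k2 == 0] = 0.0
    # Truncate the near-zero region of the kernel to stabilise the inversion
    D_inv = np.where(np.abs(D) > thresh, 1.0 / D, np.sign(D) / thresh)
    chi = np.real(np.fft.ifftn(np.fft.fftn(local_field) * D_inv))
    return chi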
Proton density quantification
The proton density (PD) quantifies the amount of MR-visible protons contributing to the MRI signal and is therefore related to the tissue water content. Water content variations are often associated with pathological processes such as oedema, but also with maturation and ageing (Neeb et al., 2006). In addition, assuming that the MR-visible protons in the brain correspond to the "liquid" protons in free water, it has been suggested that the quantity (1 − water content) can be used as a measure of the macromolecular content, known as the macromolecular tissue volume (MTV) (Mezer et al., 2013). The MRI signal intensity is intrinsically proportional to PD through the equilibrium magnetization M0; exact quantification, however, is hampered by a number of confounding factors, including field inhomogeneities and the receiver coil profile (Tofts, 2003b). Once M0 is known, a quantitative estimation of the water content requires some kind of calibration, in order to normalize PD values to a pure-water standard. Thanks to the development of accurate methods to correct for these biases, PD quantification has recently gained momentum.
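As a simple illustration of the calibration step, the sketch below normalises a bias-corrected M0 map to ventricular CSF, which is assumed to be approximately 100% water; function and variable names are illustrative.

import numpy as np

def water_content_map(m0, csf_mask):
    """Calibrate an apparent proton density (M0) map to water content,
    assuming coil-bias correction has already been applied."""
    m0_csf = np.median(m0[csf_mask])    # pure-water reference from CSF
    wc = np.clip(m0 / m0_csf, 0.0, 1.0)
    mtv = 1.0 - wc                      # macromolecular tissue volume (Mezer et al., 2013)
    return wc, mtv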
1H MR spectroscopy
MR spectroscopy (MRS) measures the concentration of chemical compounds (known as metabolites) that contain hydrogen (1H). Other nuclei can be studied (e.g., phosphorus, sodium or fluorine), but here we will focus on 1H MRS. The physical principle behind MRS is the "chemical shift", i.e., the difference in resonant frequency between each metabolite and water. This difference depends on the "electron cloud" (Buonocore and Maddock, 2015), a term that refers to the field produced by the electrons surrounding the nucleus. Thanks to chemical shift, a spectrum showing the peak resonant frequency of each metabolite can be obtained with MR, and the peak area is estimated as a measure of relative concentration. The metabolites of greatest interest are: N-acetyl-aspartate (NAA), which is seen only in neurons and axons and is believed to reflect both the density and the function of nervous cells; choline (Cho), a marker of membrane turnover, typically elevated in tumours; creatine (Cr), which is often used as a reference for quantifying other metabolites; myo-inositol (mI), a glial cell marker; and lastly glutamate + glutamine (Glx), lactate, and GABA. Absolute quantification of metabolite concentration is challenging, and thus concentration is often expressed as a ratio between the metabolite of interest and another one (typically Cr), which is assumed to remain stable in the condition under study. This is, however, not ideal, as changes to Cr have been observed, for example, in tumours (Hattingen et al., 2008; Howe et al., 2003).
Imaging using multiple modalities: overlapping or complementing information?
The possibility of measuring many different physical quantities non-invasively has enormous potential for characterising biological changes in tissue, with the final goal of devising the appropriate combination of quantitative MRI parameters for diagnosing and monitoring neurological and psychiatric disorders. Although the most informative way of combining several MRI parameters is not immediately obvious, several examples in the literature support the notion that complementary information can be obtained when using more than one MRI technique. In an attempt to establish the specificity to myelin of the qMT-derived bound pool fraction (F) and of RD, Ou et al. (2009a) used retinal ischaemia as a model of axonal damage without demyelination in control mice and of axonal damage with demyelination in shiverer mice (confirmed by immunohistochemistry). BPF and RD were significantly different between control and shiverer mice, but not between injured and uninjured eyes. By contrast, AD and relative anisotropy differed significantly between injured and uninjured eyes, but not between mouse strains. These data suggest that MT is selectively sensitive to demyelination, while dMRI could potentially be sensitive to both demyelination and axonal damage, although the well-known limitations of RD and AD must be taken into account (Wheeler-Kingshott and Cercignani, 2009). When qMT and dMRI were combined to characterise damage along the cortico-spinal tract of patients with benign MS, it was shown that MT-derived PSR was significantly different from that of controls, while FA was not; this mismatch was interpreted as indicative of extensive demyelination in the absence of axonal loss (Spano et al., 2010). A similar approach was followed by Narayanan et al. (2006), who combined quantitative MT (as a marker of myelin density) with NAA/Cr from 1H MRS (as a marker of axonal loss) to study the brain of patients with MS. In this small sample, both BPF and NAA/Cr were found to be altered compared to healthy controls, but no correlation was found between the two, suggesting that axonal damage is not strictly related to demyelination outside visible lesions. Coupled with similar studies, these examples confirm that multiple contrasts can be complementary. Nevertheless, the aim of multi-modal imaging is to go beyond the acquisition of multiple modalities analysed "in parallel", and to combine the different parameters to obtain novel biomarkers, greater than the sum of their parts. By doing so, it should also become possible to account for the collinearity of several MRI measures. So, what is the optimal way of combining them, and what are the obstacles that may prevent us from achieving this goal?
How to acquire multiple modalities

The first challenge of a multi-modal MRI protocol lies in the acquisition strategy. There are essentially two alternative approaches. The first is to acquire the modalities of interest independently; the second is to develop specialised acquisition sequences that allow weighting along more than one dimension (e.g., diffusion and T2), accompanied by analytical models able to disentangle the information provided by each single weighting.
The use of independent acquisitions has the advantage of simplicity, with sequences often available as commercial products, quantitative models already available, and usually comparatively high signal-to-noise ratio (SNR). However, bringing together data from separate acquisitions is not without problems. This is especially relevant when the two modalities require different types of readout. dMRI is typically obtained using single-shot EPI, which suffers from geometric distortions (Jezzard and Balaban, 1995). As these effects are non-linear, simple image realignment or affine registration do not compensate for them, and sophisticated approaches are required to match such data with data obtained from spin-warp acquisitions. An example of the impact this geometric mismatch can have on a multi-modal protocol is given by Mohammadi et al. (2015), who combined dMRI and MT to compute the g-ratio, i.e., the ratio of the inner to the outer axonal diameter (Stikov et al., 2015) (discussed below). Due to susceptibility distortions, some voxels of the corpus callosum showed an unrealistically high g-ratio of approximately 1; more convincing values were obtained after correcting for distortions (see Fig. 1).
Depending on the combination of parameters of interest, this issue might be addressed by serial acquisitions that share the same basic readout. For example, multi-parameter mapping (MPM) (Helms et al., 2009; Weiskopf et al., 2013) collects three multi-echo spoiled gradient-echo sequences with predominant T1-, PD-, and MT-weighting, respectively. The multiple echoes can be used to obtain estimates of T2*, but also for averaging to boost the SNR of each acquisition. From the T1- and PD-weighted scans it is straightforward to extract the amplitude of the spoiled gradient echo (apparent PD) and T1. These quantities can then be used to derive the "MT saturation" (Helms et al., 2009), a phenomenological quantity that, albeit not absolute, reflects the density of macromolecular protons after removing the confounding effect of T1.
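The sketch below illustrates how the MPM quantities can be computed from the three weighted acquisitions, following our reading of the rational approximations of Helms et al. (2008, 2009); the exact expressions and variable names are assumptions and should be checked against the original papers before use.

import numpy as np

def mt_saturation(s_pd, s_t1, s_mt, a_pd, a_t1, a_mt, tr_pd, tr_t1, tr_mt):
    """MT saturation from PD-, T1- and MT-weighted spoiled gradient echoes.
    Flip angles in radians, TRs in seconds; names illustrative."""
    # Apparent R1 from the dual flip angle (PD- and T1-weighted) pair
    r1 = 0.5 * (s_t1 * a_t1 / tr_t1 - s_pd * a_pd / tr_pd) / \
         (s_pd / a_pd - s_t1 / a_t1)
    # Apparent signal amplitude (proportional to PD)
    a_app = (s_pd * s_t1) * (tr_pd * a_t1 / a_pd - tr_t1 * a_pd / a_t1) / \
            (s_t1 * tr_pd * a_t1 - s_pd * tr_t1 * a_pd)
    # MT saturation: extra fractional saturation per TR caused by the MT pulse
    mtsat = (a_app * a_mt / s_mt - 1.0) * r1 * tr_mt - a_mt ** 2 / 2.0
    return mtsat, r1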
The idea of combining multiple gradient-echo sampling (for the T2* decay) with other weightings was first implemented for measuring T2 and T2* at the same time (and therefore T2′) in the method called gradient echo sampling of FID and echo (GESFIDE) (Ma and Wehrli, 1996), and further developed into gradient echo sampling of the spin echo (GESSE) by Yablonskiy and Haacke (1997). These are early examples of combined acquisitions that remove some of the problems associated with independent measurements.
When data are acquired for the purpose of estimating sub-voxel compartments, the geometric mismatch can have important consequences, as even small differences in the resolution of separate acquisitions can introduce bias, if images are differently interpolated (see Fig. 2). Joint acquisitions can address this issue. These acquisitions also tend to be more time-efficient than independent ones, although the complexity of the mathematical models might impose some constraints on signal-to-noise ratio (SNR). Examples were introduced already in the late 90s, although restricted to in vitro experiments, due to the required scan time.
Early studies in excised tissue (Andrews et al., 2006; Peled et al., 1999) attempted to establish the relationship between T2 species and diffusion behaviour using variants of the diffusion-weighted (DW) Carr-Purcell-Meiboom-Gill (CPMG) sequence. This acquisition consists of a standard pulsed-gradient spin-echo (PGSE) preparation (van Dusschoten et al., 1996) followed by a CPMG train of 180° pulses. An echo is collected after each refocusing pulse, thus mapping the effects of T2 decay. By altering the amplitude and the direction of the diffusion gradients, it is possible to modulate the amount of diffusion weighting and to evaluate anisotropy. Although a 2D spectrum could in principle be obtained through a 2D inverse Laplace transform, in practice this is an ill-posed problem. Alternatively, a T2 spectrum can be obtained for each diffusion weighting separately, enabling the apparent diffusion coefficient (ADC) of the water compartments corresponding to each T2 peak to be studied. Andrews et al. (2006) further modified this approach by adding a double inversion recovery (DIR) preparation to the DW-CPMG. This allowed them to selectively suppress the signal from non-myelin components exploiting T1 compartmentalisation instead of T2. The results obtained in these separate experiments were fairly inconsistent, particularly with respect to anisotropy: some studies support the notion that the shortest T2 component (identified with myelin water) is strongly anisotropic in diffusion (Andrews et al., 2006), while others do not. One possible reason for the incongruence is the choice of the total number of T2 peaks to be modelled, which differed among these studies. An additional complication is that T2 compartments do not necessarily correspond to diffusion compartments. For example, while intra- and extra-cellular water compartments are expected to have differing diffusion coefficients and behaviour (Assaf and Basser, 2005; Stanisz et al., 1997), it is still under debate whether their T2 can be distinguished (Bjarnason et al., 2005; Whittall et al., 1997). Therefore, characterising the diffusion properties of a specific T2 peak might not have a straightforward interpretation.

Fig. 1. Example of susceptibility-induced geometric distortions in single-shell dMRI data and their effects on the estimated MR-based g-ratio map. The MR g-ratio and contrast-inverted b = 0 maps (ib0) from the original (A, B, F, G) and susceptibility-distortion-corrected dMRI data (C, D, H, I) of a representative subject were compared to the subject's MT map (E, J), which did not suffer from susceptibility artifacts. The spatial mismatch between anatomical structures in the single-shell dMRI and MT data (see contours in red) was strongly reduced after susceptibility correction. The susceptibility-related mismatches between uncorrected dMRI and MT maps led to a severe locally varying bias in the g-ratio maps (e.g., the crosshair highlights a voxel with an unrealistic g ≈ 1 at the edge of the genu).
Despite these limitations, the approach remains valid in principle, and attempts to measure diffusion and T2 at the same time have recently gained momentum, after the suggestion that DTI parameters might be TE-dependent (Qin et al., 2009). While this might be explained simply by SNR considerations (i.e., the difference in DTI parameters is caused by a bias due to decreasing SNR at longer TEs), the most intriguing interpretation is that TE determines the relative contribution of each water compartment to the global signal. This view, however, suggests that a comprehensive description of water compartments can only be gained by developing joint models that take diffusion and T2 behaviour into account at the same time. Advances in hardware enable increasingly short TEs to be used in conjunction with diffusion weighting (Fan et al., 2016). Using such a system, Tax et al. (2017) obtained a dataset with b values ranging from 500 to 7,000 s mm−2 and TEs ranging from 47 to 127 ms. Their data confirmed a dependency of DTI indices on TE, and suggested that the combined diffusion-T2 spectrum might be resolved in the human brain, providing novel information about water compartments in white matter. Novel computational approaches able to resolve 2D spectra have also been proposed (Benjamini and Basser, 2016).
Another interesting example of a combined acquisition was proposed by De Santis et al. (2016), combining inversion recovery (IR) with dMRI. This sequence was developed for the purpose of mapping T1 (exploiting the IR preparation) along specific white matter tracts, even when fibers cross within a single voxel (exploiting dMRI). The feasibility of this approach in vivo was demonstrated, although the scan time remains too long for clinical translation. With future advances in hardware and sequence design, similar schemes are likely to become more manageable and open the possibility of other contrasts being incorporated in a single acquisition.
How to combine multiple modalities
The strategies used to combine MRI techniques can be broadly classified into two categories: data-driven approaches, which rely on multivariate and/or machine learning methods; and model-driven approaches, which attempt to combine parameters extracted by existing biophysical or signal models to obtain new parameters, which are believed to be more accurate or more specific to a given substrate than the original ones.
Data-driven methods
The most informative way of combining MRI parameters is not immediately obvious. A simple but effective approach is linear regression or similar methods. The interdependence between MRI parameters in this case becomes useful, and can be exploited to maximise the amount of information derived from multi-parametric protocols. One parameter that is sensitive to several biological substrates, such as T1, can be modelled as a linear combination of other MR parameters, each used as a surrogate of one or more of these substrates. The unexplained, or residual, variance is then assumed to measure the tissue component not modelled by any of the surrogates. A few examples of successful application of linear regression can be found in the literature, for instance by Ciccarelli and colleagues.

Fig. 2. Effects of resampling raw data to fit multi-compartment models. These images show the percentage difference in the estimated NODDI parameters when the raw data are downsampled (top row) or upsampled (bottom row) before performing the fitting. The resulting maps were compared with maps that were resampled after the fitting. The original voxel size was 2.5 × 2.5 × 2.5 mm³. While these effects are small, it is conceivable that combining two or more multi-component models that undergo different degrees of resampling might introduce non-negligible errors.
These examples highlight one of the potential problems with MRI biomarkers, namely their collinearity. T1 tends to correlate with T2 and MT-derived quantities; T2* and MT might share some variance: overall there is some overlap between the quantities that we hope to use to measure differing underlying pathology. Some of these data-driven approaches attempt to remove this collinearity, and to isolate the unique contribution of each specific technique.
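As an illustration of the residual-variance idea described above, the following minimal sketch regresses a simulated T1 map on two surrogate parameters and treats the residual as the contribution of unmodelled substrates; all variable names and numbers are hypothetical, for illustration only.

import numpy as np

# Hypothetical co-registered parameter maps, flattened to voxel vectors
rng = np.random.default_rng(0)
n_vox = 10_000
mtr = rng.normal(0.35, 0.03, n_vox)   # surrogate for myelin
fa  = rng.normal(0.45, 0.05, n_vox)   # surrogate for axonal organisation
t1  = 2.0 - 1.5 * mtr - 0.5 * fa + rng.normal(0, 0.02, n_vox)

# Model T1 as a linear combination of the surrogate parameters
X = np.column_stack([np.ones(n_vox), mtr, fa])
beta, *_ = np.linalg.lstsq(X, t1, rcond=None)

# The residual variance is attributed to substrates not captured by
# any surrogate (the key assumption of this class of approaches)
residual = t1 - X @ beta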
More sophisticated approaches exploit multivariate methods, which provide tools able to reduce the dimensionality of the data and to extract from them "latent variables" that better represent the characteristics of the object under study. Examples include principal component analysis (PCA), independent component analysis (ICA), and factor analysis, all of which re-express the data as a series of components obtained as linear combinations of the original observations. The difference between these three methods lies in the criteria used to define these linear combinations. Two or more MRI modalities might be differently sensitive to several microscopic properties of tissue at the same time, and applying a data reduction approach might help to identify the common "latent" source of contrast, ideally related to a specific substrate. A nice example is provided by the multivariate myelin estimation model (MMEM) proposed by Mangeat et al. (2015). Assuming that T2* and MTR are both sensitive to myelin content, but also affected by other factors such as iron content and tissue orientation (T2*) or inflammation and pH (MTR), they combined them using ICA to identify their shared information, assumed to reflect only myelin density in the human cortex (Fig. 3). These methods are relatively simple to implement and potentially useful for MRI modality combination. However, it must be noted that they are unable to distinguish between the intrinsic variability of a parameter due to the underlying microstructure and the variability dependent on measurement error and image inhomogeneity. In addition, the interpretation of the resulting components is not always straightforward, and in some cases the latent variables may remain elusive in their meaning.

Fig. 3. Example of a data-driven multi-modal application: the multivariate myelin estimation model (MMEM). MMEM aims to estimate a cortical myelin map using MTR, T2*, cortical thickness (CT) and B0 orientation maps, in two steps. First, two maps are estimated using multi-linear regressions: one using MTR, CT and B0 orientation (ME_MTR) and one using T2*, CT and B0 orientation (ME_T2*); these represent myelin-correlated values corrected for partial volume effects and fibre orientation. To merge MTR and T2* within the same framework, both regressions are performed with a common dependent variable (BMM). Second, the shared information between ME_MTR and ME_T2* is extracted for each subject using ICA decomposition, which separates the signal into two mathematically independent components. The first component of the ICA is the source sharing the highest variance between ME_MTR and ME_T2*, the hypothesis being that it is an indicator of myelin content. Reproduced with permission from Mangeat et al. (2015). Copyright 2015 Elsevier Inc.
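A minimal sketch of the shared-information extraction underlying approaches like MMEM, assuming two co-registered, myelin-sensitive maps are already available (synthetic data here; the full MMEM pipeline also includes the regression steps described in the caption above):

import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical per-vertex myelin-correlated maps from the two regressions
rng = np.random.default_rng(1)
n = 5_000
myelin = rng.normal(0.5, 0.1, n)            # shared latent source
me_mtr = myelin + rng.normal(0, 0.05, n)    # MTR-based estimate
me_t2s = myelin + rng.normal(0, 0.08, n)    # T2*-based estimate

X = np.column_stack([me_mtr, me_t2s])
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)

# The component most correlated with both inputs is taken as the
# shared (putatively myelin-related) source
corr = [abs(np.corrcoef(sources[:, i], X[:, 0])[0, 1]) for i in range(2)]
shared = sources[:, int(np.argmax(corr))]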
Machine learning (ML) approaches constitute a more advanced class of computational methods, which can be used to associate a combination of MRI techniques with a range of microstructural features (Lemm et al., 2011). ML methods rely on training an algorithm (most commonly some kind of classifier) to identify the features of interest using real data (Ashburner and Kloppel, 2011). Once trained, the classifier can be used on previously unseen data. Although there are no published examples to date in the context of multi-modal imaging, animal or post-mortem data could be used to associate a combination of MRI parameters with a specific tissue substrate validated by histology, and then translated into clinical applications. One of the limitations of ML algorithms is that they require a very large number of observations in order to produce reliable associations (i.e., in order to be appropriately trained), which might not always be possible in the context of biological samples.
Model-driven methods
This family of methods differs from multivariate models in that it attempts to combine parameters extracted by existing biophysical or signal models to obtain new parameters, which are believed to be more accurate or more specific than the original ones. Biophysical models are those that explain the MR signal as a function of biological properties (for example, the axon diameter distribution (Alexander et al., 2010; Assaf et al., 2008)), whereas signal models explain the MR signal through mathematical or statistical properties (for example, diffusion kurtosis (Jensen et al., 2005)).
A simple example of this kind is driven equilibrium single pulse observation of T1 and T2 (DESPOT1 and DESPOT2), a method for quantifying T1 and T2 based on steady-state free precession (SSFP) sequences (Deoni et al., 2003). The SSFP signal equation is a function of both T1 and T2, as both the longitudinal and the transverse magnetization are brought into dynamic equilibrium through the application of repeated RF pulses (Young et al., 1986). In order to disentangle T1 and T2, an independent measure of either one is required: T1 is estimated using spoiled gradient echo at variable flip angles (Bluml et al., 1993), thus enabling T2 to be extracted. The method can be generalised to assume multiple water components, characterised by separate relaxation times (Deoni and Kolind, 2015). The multi-component version (mcDESPOT) yields maps of the fractions of myelin water as well as of intra- and extra-cellular water (which cannot be distinguished using this method), and has been used in multiple studies to characterise myelination and other microstructural properties of tissue (Combes et al., 2017; Deoni et al., 2012; Kitzler et al., 2012; Kolind et al., 2013).
The sensitivity of relaxometry to myelin can be further exploited by combining these techniques with dMRI. While dMRI is highly sensitive to tissue geometry and integrity, the long echo times typically required to achieve sufficient diffusion weighting mean that the fast-decaying myelin component contributes no signal. In principle the complementarity of the two techniques can be exploited to obtain separate estimates of the volumes of the myelin, extra-cellular and intra-cellular spaces. Their complementarity derives from the fact that multi-compartment models of dMRI can easily separate the intra-cellular and extra-cellular volume fractions but are typically not sensitive to myelin, while mcDESPOT does not distinguish intra- and extra-cellular spaces and measures their volume fractions as a combined sum. Bouyagoub et al. (2017) proposed a simple model that requires the separate acquisition of mcDESPOT and neurite orientation dispersion and density imaging (NODDI; Zhang et al., 2012) to yield separate intra-cellular and extra-cellular volume fraction maps along with myelin maps (Fig. 4).
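The sketch below shows one plausible way of combining the two models' outputs into four volume fractions, in the spirit of Bouyagoub et al. (2017); the exact partition used by the authors may differ, and all parameter names are illustrative.

import numpy as np

def combine_fractions(mwf, ficvf, fiso):
    """Combine mcDESPOT and NODDI outputs into four volume fractions.
    mwf   : myelin water fraction from mcDESPOT
    ficvf : intra-neurite fraction of the tissue compartment (NODDI)
    fiso  : isotropic (free water) fraction (NODDI)
    """
    non_myelin = 1.0 - mwf
    v_csf = non_myelin * fiso
    v_ic = non_myelin * (1.0 - fiso) * ficvf           # intra-cellular
    v_ec = non_myelin * (1.0 - fiso) * (1.0 - ficvf)   # extra-cellular
    return mwf, v_ic, v_ec, v_csf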
Fig. 4. Model-driven example of a multi-modal approach. By exploiting the sensitivity of NODDI and mcDESPOT to different water compartments, it is possible to derive volume fraction maps for intracellular, extracellular, CSF and myelin water (Bouyagoub et al., 2017).

As discussed in the joint acquisition section, multi-compartment models of diffusion such as NODDI provide estimates of tissue fractions that are relaxation-weighted. This means that acquisition parameters such as TE might affect the results, but also that, in the presence of abnormal T2 values, the contributions of T2 and diffusion changes to abnormal volume fractions cannot be disentangled. Even in the healthy brain, CSF, which constitutes the bulk of the isotropic diffusion volume fraction, has a much longer T2 than parenchyma, potentially affecting the estimation of the isotropic diffusion component in white matter. Combining T2 and diffusion in a single acquisition is feasible, as discussed above, but challenging. An alternative option is to independently reconstruct the corresponding T2 spectrum, and then to feed these values into the diffusion model. It was recently shown that combining mcDESPOT, which provides estimates of intra/extra-cellular T2 and CSF T2, with NODDI allows the latter to be adjusted to incorporate T2 values, accounting for the different T2 of different compartments and thus removing the bias (Bouyagoub et al., 2016). See Fig. 5 for an example.

Fig. 5. Exploiting multi-modal data to correct model bias. Fig. 5A shows the isotropic component estimated by NODDI in a healthy participant. The isotropic fraction appears unrealistically high in the white matter, and one of the possible explanations for this is that NODDI fractions are relaxation-weighted. Bouyagoub et al. (2016) used each compartment's T2 estimated by mcDESPOT to correct for this, yielding the maps shown in Fig. 5B.

The local magnetic susceptibility is known to be affected by the orientation of white matter fibers with respect to the main magnetic field. Other MRI contrasts reflecting iron and myelin characteristics might also depend on the orientation of the microstructure. Combining these techniques with DTI enables the orientation-dependency of these parameters to be investigated and modelled. This has been done for QSM, relaxometry (Gil et al., 2016) and the macromolecular pool absorption lineshape in MT (Pampel et al., 2015).
A clear example of the augmented information provided by combining two MRI techniques rather than focusing on a single contrast is given by the g-ratio framework (Stikov et al., 2015), extensively reviewed by other papers in this issue. The g-ratio is the ratio of the inner to the outer diameter of a myelinated axon, and is known to be associated with the speed of conduction along the axon (Rushton, 1934). A method for estimating the so-called "aggregate g-ratio" was proposed, building on simple geometric considerations and exploiting the respective sensitivities of MT imaging to myelin and of dMRI to the intra-axonal water fraction (Stikov et al., 2015). Similar approaches based on combining dMRI and relaxometry have also been suggested (Melbourne et al., 2014). Because of its link with conduction velocity, the g-ratio can be directly linked to axonal physiology and function, and represents an ideal candidate tool for exploring the structure-function relationship. Soon after its introduction, this framework was applied to the study of brain maturation and of variability within the healthy population (Dean et al., 2016; Mohammadi et al., 2015).
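A minimal sketch of the aggregate g-ratio computation under the Stikov et al. (2015) framework, assuming a myelin volume fraction has already been calibrated from an MT measure (the calibration itself is an assumption, and parameter names are illustrative) and the axonal volume fraction is built from NODDI parameters:

import numpy as np

def aggregate_g_ratio(mvf, ficvf, fiso):
    """Aggregate g-ratio from a myelin volume fraction (mvf) and NODDI
    tissue parameters (ficvf, fiso)."""
    # Axonal volume fraction from the non-myelin, non-CSF water
    avf = (1.0 - mvf) * (1.0 - fiso) * ficvf
    # g = inner/outer diameter; in terms of areas: g^2 = AVF / (AVF + MVF)
    return np.sqrt(avf / (avf + mvf))

# Example with plausible white-matter values (yields g of roughly 0.78)
g = aggregate_g_ratio(mvf=0.25, ficvf=0.55, fiso=0.05)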
An important observation with respect to model-driven approaches is that they rely heavily on the correctness of the biophysical modelling that links the signal behaviour of each technique to biological parameters. Any bias will propagate into the multi-modal model, sometimes in an unpredictable fashion. In addition to systematic bias, one must not forget that any MRI parameter is affected by noise, and such noise will propagate into the newly derived index resulting from their combination. In some cases this effect can be estimated using the propagation of error equation (Bevington and Robinson, 2002), which shows how combining two noisy measures may blur any signal beyond detection. A very simple example is the combination of R2 (= 1/T2) and R2* (= 1/T2*) for computing R2′ = R2* − R2: if the variances associated with R2 and R2* are, respectively, σ1 and σ2, then according to the propagation of error equation the variance associated with R2′ is given by:

σR2′ = σ1 + σ2 (2)
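As a worked illustration of Eq. (2), the sketch below computes R2′ and its propagated variance, assuming independent errors on R2 and R2*; the numerical values are illustrative, not measured.

import numpy as np

def r2_prime(r2_star, r2, var_r2_star, var_r2):
    """R2' and its propagated variance (independent errors assumed;
    variances add for a difference of two measurements)."""
    r2p = r2_star - r2
    var_r2p = var_r2_star + var_r2
    return r2p, var_r2p

# Illustrative values in s^-1: a small R2' relative to its propagated
# error produces noisy R2' maps, as noted in the text
r2p, var = r2_prime(r2_star=20.0, r2=14.0, var_r2_star=1.0, var_r2=0.8)
snr_like = r2p / np.sqrt(var)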
MR fingerprinting
A completely new approach to multiparametric MRI is MR fingerprinting (MRF) (Ma et al., 2013). The MRF concept relies on the idea that unique signal evolutions can be generated for different tissues through the continuous variation of the acquisition sequence parameters. Once the data are collected, pattern recognition can be used to associate each signal evolution with a specific tissue, defined by a dictionary containing signal evolutions from all possible combinations of parameters. This enables T1/T2 values to be associated with that specific tissue, as well as other MR quantities (depending on the acquisition parameters that are varied during acquisition). This is conceptually comparable to matching a person's fingerprint to a database. MRF relies on the use of undersampling techniques, such as compressed sensing (Lustig et al., 2007), to make the acquisition feasible in terms of scan time. Although still in its infancy, MRF offers a series of advantages compared to standard MR acquisition: it allows the collection of multiparametric data in a short time; it is robust against field inhomogeneities and motion artifacts; and it is extremely versatile. Providing a detailed review of MR fingerprinting and its applications goes beyond the scope of this paper, but we foresee an increasing role for this approach in both clinical and research applications of MRI.
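To make the dictionary-matching idea concrete, here is a toy sketch; the signal model is a stand-in for illustration only, not a simulation of a real MRF sequence, and all numbers are assumed.

import numpy as np

rng = np.random.default_rng(2)
n_tp = 500                               # time points in the "sequence"
t = np.linspace(0.01, 5.0, n_tp)

# Dictionary: one row per (T1, T2) combination on a coarse grid
t1_grid, t2_grid = np.meshgrid(np.arange(0.3, 2.5, 0.05),
                               np.arange(0.02, 0.3, 0.005), indexing="ij")
pairs = np.column_stack([t1_grid.ravel(), t2_grid.ravel()])
D = np.array([(1 - np.exp(-t / t1)) * np.exp(-t / t2) for t1, t2 in pairs])
D_norm = D / np.linalg.norm(D, axis=1, keepdims=True)

# Noisy measured fingerprint with "true" T1 = 1.2 s, T2 = 0.08 s
sig = (1 - np.exp(-t / 1.2)) * np.exp(-t / 0.08) + rng.normal(0, 0.01, n_tp)

# Match by maximum normalised inner product, like matching a fingerprint
best = np.argmax(D_norm @ (sig / np.linalg.norm(sig)))
t1_est, t2_est = pairs[best]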
MRI and nuclear medicine
Multi-modal imaging can be thought of as a more general approach, going beyond the boundaries of MRI. By combining MRI with other, complementary neuroimaging techniques, such as neurophysiological and nuclear medicine methods, it is theoretically possible to characterise more complex systems. For example, positron emission tomography (PET) can exploit ligands for specific neurobiological substrates to provide high specificity. Thanks to the spread of hybrid modalities, the marriage between MRI and nuclear medicine is likely to be a long and happy one. To date, examples of combined MRI and PET are still limited; nevertheless, it was shown that biomarkers obtained by bringing together fluorodeoxyglucose (FDG) PET, gray matter volumetrics and dMRI can better explain memory disorders in patients with AD than each single metric in isolation (Walhovd et al., 2009). Similarly, depressive symptoms in MS can be partially explained by functional connectivity of the hippocampus combined with imaging of PET ligands targeting the 18-kDa translocator protein (TSPO), which is sensitive to the activation of microglia and thus to acute inflammation (Colasanti et al., 2014). More quantitative approaches are likely to be developed in the near future.
Conclusions
Although some degree of overlap exists between the information accessible through the main microstructural imaging techniques based on MRI, their complementarity ensures that their combination can be exploited to make the whole greater than the sum of its parts. While this is generally accepted and supported by a large body of data, the best way of bringing together different methods is still controversial. Multivariate methods offer a simple solution, but a somewhat complicated interpretation of the results. Joint models provide a more direct description of the microstructure but require more complex data acquisition strategies, a large degree of approximation, and are subject to a number of biases. Ultimately validation will be essential to understand the real potential of these methods, and their implementation will require the combined efforts of physicists, computational scientists and biologists. | 9,945.6 | 2017-11-04T00:00:00.000 | [
"Medicine",
"Physics"
] |
Analysis of the State and Development Prospects of the Irrigation Equipment Fleet in the Russian Federation
The article presents the results of monitoring the availability and state of production of irrigation machinery and equipment in Russia. Based on the information and analytical studies carried out, data on the structure of the irrigation equipment fleet are presented, taking into account the availability and supplies of Russian and imported equipment. A complex of engineering, technical, organizational, and managerial measures has been developed, aimed at providing agricultural producers with domestic irrigation machinery corresponding to the modern world scientific and technical level of technology development. A relevant issue is not only the conduct of experimental design and technological work on a new generation of sprinkling equipment holding Russian authorship priority; a comprehensive State policy is also needed, one that enables Russian machine-building enterprises to organize and develop serial production and reduces the dependence of the melioration industry on imported equipment, consistent with the goals of the Food Security Doctrine and the Import Substitution Strategy for the agro-industrial complex (AIC) of the Russian Federation.
Introduction
The main zones of agricultural production in the Russian Federation are located in difficult natural and climatic conditions: a shortage of natural moisture supply is observed on more than 70% of arable land. Sustainable production of agricultural products in the arid climatic zone of the Russian Federation can be ensured only through the development of irrigation of agricultural lands [1,2].
The efficiency of using water, soil, climatic, material, technical, and energy resources, as well as the ecological state of the environment, largely depends on the quality of irrigation technologies and techniques, which determine the quality of water distribution and the regulation of the soil water regime, and therefore the crop yield and the amount of unproductive irrigation water losses.
An important risk factor for the further development of irrigated areas is the insufficient number of new Russian research and development projects on sprinkler equipment introduced into production, given the significant share of foreign irrigation equipment. Giving foreign companies complete control over the technical support of the Subprogram for the development of the amelioration complex of Russia (hereinafter, the "Melioration Program"), operating under the State Program for the Development of Agriculture and the Regulation of Raw Materials and Food Markets, approved by Decree of the Government of the Russian Federation No. 717 dated July 14, 2012, as amended by Government Resolution No. 415 dated March 18, 2021 (hereinafter, the "State Program of the Agro-industrial Complex"), in terms of the construction, reconstruction and technical re-equipment of hydro-reclamation systems, is unacceptable, since it contradicts the requirements of Russia's food and technological security [3].
Therefore, it is very important not only to develop design documentation for a new generation of sprinkler equipment holding Russian authorship priority, but also to ensure the serial production necessary for the development of irrigated agriculture, with irrigation technology and equipment corresponding to the world level of scientific and technological development [4,5].
To implement the import substitution program in the field of hydro-reclamation, a State policy is required that creates opportunities for Russian machine-building enterprises to organize and develop serial production, which corresponds to the objectives of the State Program for AIC Development and will reduce the dependence of the land reclamation industry on imported equipment.
Purpose
To analyze the engineering and technical level and the technical and operational parameters of irrigation equipment used in agricultural production; and to substantiate the need for, and the scientific and technological capability of, Russian research organizations and machine-building enterprises to modernize the machine and technology base of agricultural producers working on irrigated lands, by organizing the production of domestic irrigation equipment backed by a material and technical base, a complete set of technical documentation, and authorship rights for serial production.
Research methodology
FSBSI All-Russian Research Institute Raduga conducts research on monitoring the actual state, assessing the technical level and operation of hydro-reclamation systems, and solving issues of the development, production, implementation, and operation of new irrigation equipment [6,7,8].
The research drew on the following scientific and methodological base: promising developments of research and production organizations; the works of foreign and domestic scientists in the field of irrigation technologies and technical means; and the results of scientific and technical activities in the field of sprinkler irrigation technologies and equipment, obtained under the guidance and with the direct participation of the authors during experimental design work [3,4].
FSBSI All-Russian Research Institute Raduga monitored the supply and availability of irrigation equipment in agricultural production and the state of domestic production of various types of technical irrigation equipment. For this monitoring, special statistical observation forms on irrigation equipment were developed, based on the previously existing 2-mech form and covering all modern types of irrigation equipment. The research examined production volumes, production potential, the presence of service centers in the regions of Russia, the completeness of technical documentation for equipment production, the availability of patented technical solutions, certification and test results, and supplies of irrigation equipment from foreign countries according to the Customs Committee of the Russian Federation, and analysed the results of the implementation of the Reclamation Program in 2014-2020.
Research results
The reclamation fund of the Russian Federation amounted to 9.46 million hectares at the beginning of 2020 [6,7], with the following structure of reclaimed land:
• out of 4.68 million hectares of irrigated land, 3.89 million hectares were actually used in agricultural production; irrigation was supplied by state reclamation systems on an area of 1.41 million hectares, and about 0.50 million hectares were irrigated from local runoff waters on the initiative of agricultural producers.
The state reclamation infrastructure ensures the operation of hydro-reclamation systems covering 3.8 million hectares of reclaimed land, of which 2.9 million hectares are on irrigation systems. State hydro-melioration systems supply water through the main and inter-farm network to on-farm irrigation systems covering about 1.41 million hectares, with a water intake volume for irrigation of about 7.0 km³.
The structure of irrigated areas was also assessed by type of irrigation equipment. According to an expert assessment, about 1,690 wide-coverage sprinklers and 550 hose barrel sprinklers were imported in total in 2016-2020, which on average per year amounts to at least 400 wide-coverage electrified sprinklers (WCES) and about 100 hose barrel sprinklers (HBS).
The total cost of HBS imports is $9.87 million, with an average unit cost of $18.50 thousand (maximum price per unit $23.80 thousand, minimum $10.80 thousand).
A total of 1,690 wide-coverage sprinklers (WCES) were delivered to Russia, excluding spare parts and accessories. The major foreign suppliers are Valley, Lindsay, T-L, Reinke, and Bauer.
The cost of WCES import supplies is about $120.60 million, with an average cost of about $95.0 thousand per unit (maximum price up to $110.0 thousand, minimum $75.0 thousand).
Production of quick-assembly pipelines for irrigation. OOO "POLYPLASTIC Group" (Omsk) manufactures plastic pipelines and couplings, as well as mobile irrigation kits based on quick-assembly pipelines for irrigating areas of 5, 10, 15, 25, and 50 hectares. Design and technical documentation, as well as production-cycle technical documentation, are available. Serial production has been organized, with a current capacity of 500 mobile irrigation kits per year. Localization of production in the Russian Federation reaches 80% (imports: couplings, fittings, and sprinklers). Groups of regional dealers and service centers have been formed in the regions of the Russian Federation.
Drip irrigation systems. OAO "Tuboflex" (Uglich) produces drip irrigation systems and components: drip tape with a production volume of up to 200.0 million linear meters per year; start-connectors for drip tape, up to 500.0 thousand pieces per year; repair fittings for drip tape, up to 500.0 thousand pieces per year; and pressure and inlet hoses for reclamation equipment, up to 5.0 million linear meters per year. Serial production is under way: drip tape deliveries to the regions of Russia in 2016-2018 exceeded 300.0 million linear meters (about 20% of the market).
OOO "INTECO" (Rostov Region, Novoshakhtinsk). Drip irrigation systems, accessories: drip tape with a production volume of up to 420.0 million linear meters; components: start connectors, fittings, mini valves, rubber gaskets for drip irrigation systems -4.0 million pieces for each type of equipment per year; connecting fittings for LayFlat more than 300.0 thousand units. Serial production, the volume of manufactured products amounted to 280.0 million linear meters.
ZAO "New Age of Agrotechnology" (Lipetsk Region, Chaplygin). Drip irrigation systems, accessories: drip tape with a production volume of up to 300.0 million linear meters per year. Technical and technological documentation is available in the full volume necessary for serial production. Own production base. Serial production, the volume of manufactured products amounted to 280.0 million linear meters.
The main nomenclature of technical means and equipment for irrigating the land plots of agricultural producers using irrigated land, offered on the Russian market by Russian and foreign manufacturers, is as follows:
• pneumatic-tired wide-coverage sprinklers of circular and frontal action with an electric drive, operating in automatic mode from a closed network, with an irrigated area per season from 10-50 to 200 hectares;
• hose barrel sprinklers with medium-range devices or cantilever carts with low-pressure devices, with a service area per season from 3 to 30 hectares;
• portable quick-assembly sprinkler pipelines made of aluminum or plastic, with a service area per season of up to 50 hectares;
• a wide range of equipment, including sprinklers operating at a pressure of 0.3 to 0.5 MPa, low-pressure sprinkler nozzles operating at a pressure of 0.1-0.2 MPa, shut-off and control hydraulic fittings, pressure and flow regulators, booster pumps and power pumping equipment, special devices for applying fertilizers with irrigation water, and computer systems and technical means for automatic irrigation control;
• for drip irrigation systems: drip tubes and tapes with pressure-compensated and uncompensated drippers, connecting fitting kits, flexible LayFlat (LFT) pipelines with a diameter of 2-4" and a working pressure of 4-9 atmospheres, control and shut-off valves, fine and coarse filters of various capacities, fertilizer application units, air release valves, control and automation systems for the irrigation process, as well as spare parts and accessories for equipment installation;
• for micro-sprinkling systems: a wide range of micro-sprinklers operating at pressures from 0.15 to 0.35 MPa, low-pressure sprinklers operating at a pressure of 0.1-0.2 MPa, racks and nozzle holders, shut-off and regulating fittings, pressure regulators and booster pumps, pumping and power equipment, special equipment for applying fertilizers with irrigation water, and computerized irrigation control systems.
All of the irrigation equipment is designed to operate from a closed irrigation network and is oriented toward automated operation, multipurpose use, the use of computer monitoring and control systems, a wide range of modifications and options, and maximum adaptation to the specific conditions of use. [10]
Results and discussion
An analysis of the commissioning of reclaimed land and the supply of sprinklers in previous years (2016-2019) shows that the market was covered mainly by imported equipment. According to an expert assessment, about 1,690 wide-coverage sprinklers and 550 hose barrel sprinklers were imported in total in 2017-2020, which on average per year amounts to at least 350 wide-coverage electrified sprinklers (WCES) and more than 100 hose barrel sprinklers (HBS), or up to 5 billion rubles per year. Since imported equipment is purchased almost everywhere, significant financial resources are withdrawn from the country's economy every year, including tax payments, while the associated jobs are created in foreign economies, contributing to the social and economic development of foreign countries.
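The averages quoted here follow from simple division; a minimal Python sketch of the arithmetic, where the unit counts and the average WCES price are the figures cited above, and the rouble exchange rate is an illustrative assumption of ours, not a figure from the article:

```python
# Back-of-the-envelope check of the import estimates quoted above.
# Unit counts are the cited expert assessment for 2017-2020; the WCES
# price is the average quoted earlier; the RUB/USD rate is assumed.

wces_imported_2017_2020 = 1690   # wide-coverage electrified sprinklers
hbs_imported_2017_2020 = 550     # hose barrel sprinklers
years = 4

wces_per_year = wces_imported_2017_2020 / years   # ~423, hence "at least 350"
hbs_per_year = hbs_imported_2017_2020 / years     # ~138, hence "more than 100"

avg_wces_price_usd = 95_000
rub_per_usd = 70                  # assumed exchange rate for illustration

annual_wces_spend_rub = wces_per_year * avg_wces_price_usd * rub_per_usd
print(f"WCES/year: {wces_per_year:.0f}, HBS/year: {hbs_per_year:.0f}")
print(f"Annual WCES import spend: {annual_wces_spend_rub / 1e9:.1f} bn RUB")
# HBS purchases and exchange-rate swings push the combined total toward
# the "up to 5 billion rubles per year" cited in the text.
```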
In fact, by the beginning of 2020, over the preceding three years Russian manufacturers had increased their share of average annual supplies from 0% to 15% for wide-coverage sprinklers and from 0% to 3% for hose barrel sprinklers, relative to the volume of foreign equipment supplied.
The cost of equipment from foreign manufacturers, based on an expert assessment of information resources and dealership offers, is as follows (with practically identical technical and operational indicators across the various foreign manufacturers):
• wide-coverage circular sprinklers: $75.0-100.0 thousand (basic machine with a service area of up to 70 hectares per season);
• wide-coverage frontal sprinklers: $90.0-120.0 thousand (basic machine with a service area of up to 70 hectares per season);
• hose barrel sprinklers with a hydraulic drive: €35.0-42.0 thousand (basic machine with a service area of up to 30 hectares per season);
• drip irrigation systems: $20.0-25.0 thousand (basic module with a service area of up to 10 hectares);
• stationary sprinkling systems: $25.0-30.0 thousand (basic module servicing an area of up to 10 hectares);
• micro-sprinkling systems: $30.0-40.0 thousand (basic module servicing 10 hectares).
An analysis of the pricing policy of domestic and foreign manufacturers shows that the difference in cost between domestic and imported irrigation equipment is 400,000-1,600,000 rubles per machine, and the difference in the amount of subsidies is from 250,000 rubles for hose barrel machines and from 800,000 rubles for wide-coverage sprinklers. It is important to note that when imported equipment is purchased, not only does the farmer or agricultural enterprise overpay, but so does the Russian budget when paying subsidies. In effect, subsidies for the purchase of imported machines generate tax payments in the countries where the foreign manufacturers are located (USA, Austria, Spain, Italy, Germany, Turkey, UAE).
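To make the double overpayment concrete, here is a minimal sketch; the per-machine price gap and extra-subsidy figures are those quoted above, while the annual purchase volume of 350 WCES is taken from the import estimate earlier in this section:

```python
# Illustrative estimate of annual overpayment when imported wide-coverage
# sprinklers are bought instead of domestic ones. Price-gap and subsidy
# figures are quoted in the text; 350 units/year is the average import
# volume cited earlier.

units_per_year = 350
price_gap_rub = (400_000, 1_600_000)      # imported minus domestic, per machine
extra_subsidy_wces_rub = 800_000          # extra subsidy per imported WCES

budget_overpayment = units_per_year * extra_subsidy_wces_rub
farmer_overpayment = tuple(units_per_year * gap for gap in price_gap_rub)

print(f"Extra budget outlay: {budget_overpayment / 1e6:.0f} mln RUB/year")
print(f"Extra buyer outlay:  {farmer_overpayment[0] / 1e6:.0f}"
      f"-{farmer_overpayment[1] / 1e6:.0f} mln RUB/year")
```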
Analysis of the irrigation equipment market shows that the share of imported sprinkler equipment is growing because large-scale serial production of Russian sprinkler equipment has not been established in recent years, mainly due to the lack of high-quality design and technological documentation. Throughout the implementation of the Land Reclamation Development Program, irrigated land has been commissioned mainly with imported irrigation equipment, which on average accounts for more than 95% of the total technical equipment on the commissioned irrigated land.
The main problems hindering the development and widespread introduction of domestic irrigation equipment:
• the lack of specialized sales centers (pre-sale preparation) and a service-maintenance production base;
• the lack of working capital among small and medium agricultural producers for developing design specifications and estimates and for purchasing irrigation equipment;
• the orientation of large agricultural producers toward imported irrigation equipment and technologies;
• problems in training hydraulic engineers, mid-level technical specialists, and workers for the production, installation, and operation of irrigation equipment.
Important negative factors hindering the development of domestic production stem from the unfair competition of foreign manufacturers:
• dumping offers from foreign manufacturers, including equipment made in China under the names of well-known brands;
• the lack of priority for Russian manufacturers in the allocation of subsidies for the purchase of irrigation equipment within the framework of the Land Reclamation Development Program;
• the negative reputation of domestic irrigation equipment, owing to several manufacturers who supplied irrigation machines of poor quality without subsequent service and then left the market;
• the lack of experience in research and production activities in developing and organizing the production of irrigation equipment and in promoting it on the market.
The tasks of creating and developing domestic production cannot be solved (closed) in the field of research and development alone; they must be addressed systematically, through a single complex of organizational, production, institutional, and personnel measures, and they require appropriate scientific, methodological, regulatory, and technical support, along with the development of the material, technical, and resource base of scientific, educational, and operational organizations and machine-building enterprises. [11] At present, Russian manufacturers are ready and able to fully meet the demand for irrigation equipment stimulated by State support (subsidies) of agricultural producers within the framework of the Land Reclamation Program.
Conclusions
Domestic enterprises have the material and technical base, the equipment, and the necessary set of design, technical, and technological documentation for serial production, as well as, to a sufficient extent, the intellectual property rights for operations on the territory of the Russian Federation. The development of domestic production and the creation of competitive irrigation equipment require the implementation of a full cycle of scientific research and experimental design work, as well as resolution of the issues surrounding the widespread introduction of domestic irrigation equipment.
To solve the problems of creating and developing domestic production, comprehensive State support measures are needed. These should cover the development of reclamation and of the scientific and technical infrastructure, the strengthening of the material and technical base of scientific institutions, the stimulation of domestic production, and the creation of a favorable competitive environment for domestic producers of irrigation equipment.
The development of domestic production requires planned loading of production capacities, as well as State support within the framework of the Land Reclamation Program, by supplementing the Subsidy Procedure with the provision that "a subsidy to agricultural producers is provided when reclaimed land is put into operation, only in the case of using irrigation machinery and equipment of domestic production. State subsidies to agricultural producers for the use of foreign irrigation machinery and equipment are allocated only in the absence of Russian analogs." The issue of establishing duties on irrigation machinery and equipment imported into the Russian Federation, including sprinklers, should also be considered and resolved, as has already been done in other industries of the Russian Federation, with a clause providing that "foreign irrigation machinery and equipment are not subject to duty in the absence of Russian analogs." On the other hand, it is necessary to tighten the responsibility of agricultural producers for the quality and technical level of design solutions and the construction of on-farm irrigation systems, to oblige them to report on the efficiency indicators of the operation of reclaimed lands, and to restore the form of statistical reporting on production on reclaimed lands and on the technical condition of on-farm irrigation systems.
The implementation of a set of measures to support domestic manufacturers of sprinkler equipment will make it possible to carry out the decisions of the Government of the Russian Federation on import substitution, safeguard Russia's social and economic interests, and fulfill the tasks of the country's Doctrine of Food Security. It will also lead to the development of production, the expansion of the tax base, the creation of thousands of new jobs, and the building of new cooperative ties between Russian machine-building enterprises, which means increasing the efficiency of domestic machine-building production and replenishing the budgets of the Russian Federation at all levels with the tax payments of Russian manufacturers. | 4,327.4 | 2021-01-01T00:00:00.000 | [
"Economics"
] |
Import Substitution as a Condition for Sustainable Development of Mining Regions
One of the promising directions for studying the indicators of sustainable development of an extracting region is the analysis of economic trends, together with the definition, expert evaluation, and practical approbation of their values. Assessing the whole complex of environmental, economic, and social indicators of sustainable development is of obvious importance for monitoring, evaluating, and adjusting regional targeted programs, developing concepts and programs for long-term socio-economic development, and analyzing their effectiveness. It makes it possible, if necessary, to correct specific directions of the socio-economic and ecological development of the region. In extracting clusters, the need to develop an import-substituting machine-building cluster is urgent. To establish it, it is necessary to organize an investment consortium of consumers of engineering products, an international network of industry innovation, research-and-production, and design firms, a regional agency for attracting and protecting investments in the import substitution of industrial products, and a set of tax incentives for residents of the import-substituting cluster.
Introduction
The term "sustainable development" was introduced in wide use by the Prime Minister of Norway G. X. Brundtland who headed the International Commission on Environment and Development in 1987 [1].Sustainable development in this case was understood as long, continuous, stable, self-sustaining, responsible, safe development, ensuring the satisfaction of the current needs of the living generation without a threat to meeting the needs of future generations.
G. H. Brundtland's definition of sustainable development draws no distinction between the fundamentally different concepts of development and growth, which causes difficulties in its use: in her view, the development of the peoples and countries of the world can continue indefinitely, whereas gross growth is limited by the carrying capacity of the ecosphere and its ability to regenerate life-support systems. Obviously, understanding sustainable development as "development without growth" would not be entirely correct. H. Daly takes this aspect into account, defining sustainable development as "socially sustainable development, in which gross economic growth should not exceed the carrying capacity of life support systems" [2]. R. A. Post rightly points out in this connection that the concept of sustainable development is in fact an attempt to balance two moral requirements of society: the need to meet the living generation's needs for economic development and, at the same time, the need to achieve the "sustainability" of this development, ensuring that the current generation does not sacrifice the future one for the sake of the present [3]. Accordingly, he interprets the concept of "sustainability" as an ongoing process of gradual development from one generation to another, as a level of reasonable well-being and quality of life maintained across generations for as long as possible.
Materials and Methods
In substance, the concept of sustainable development is interdisciplinary and inter-sectoral in nature, encompassing ecological, economic, and social aspects. The ecological component of sustainable development implies ensuring the integrity and viability of the biological and physical natural systems on which, in turn, the global stability of the entire biosphere depends. The economic component is aimed at optimizing the use of natural resources, using environmentally friendly technologies, creating environmentally friendly products, and minimizing and processing waste. Finally, the social component implies the development of human capital, the maintenance of social stability, the equitable distribution of resources and opportunities among all members of society, and a reduction in the number of social conflicts [4-6].
The United Nations Conference on Environment and Development (UNCED) in 1992 recommended that the governments of all countries develop their own national strategies for sustainable development. Following this recommendation, Russia in 1994 approved the "Main provisions of the state strategy of the Russian Federation for the protection of the environment and sustainable development", and in 1996 adopted the "Concept of the transition of the Russian Federation to sustainable development", which aims to ensure a balanced solution of socio-economic problems and of the problems of preserving a favorable environment and natural-resource potential in order to meet the needs of present and future generations [7, 8]. The first criterion for sustainable development in S. Murai's model is population growth of less than 0.5% per year. Annual growth of 1.0-1.5% corresponds to the critical level, and more than 2% to the destructive one [5]. Thus, the absence of population growth, or even its decline, is an indicator of sustainable development in this model, since it implies no increase in the anthropogenic load on the biosphere. Taking into account the ratio of territory to population, the anthropogenic load in the Kemerovo region is relatively small. Although Kuzbass is the most densely populated region of Siberia, its population density in 2013 was only 28.65 people/km². At the same time, according to the Territorial Body of the Federal State Statistics Service for the Kemerovo Region [5], Kuzbass has experienced a natural population decline since the mid-1990s: the population of the region was 2773.0 thousand people in 2009 and 2734.1 thousand people in 2013. At the same time, the rate of natural population decline is steadily falling, from 2.6 to 0.9 people per 1000 population over the same period. Thus, according to this criterion, the Kemerovo region meets the indicators of sustainable development and, at the same time, demonstrates certain positive changes in the socio-demographic situation, which is even more significant from a strategic point of view [4, 5, 9].
The second criterion for sustainable development is an annual increase in gross regional product (GRP) of 3 to 5%; growth of 8-10% per year is interpreted as critical, and more than 10% or less than 0 as destructive. According to official statistics, the gross regional product of the Kemerovo region was 576 billion rubles in 2009 and 767 billion rubles in 2013 [5]. The average annual increase in GRP calculated from the data for these years, even with relative decreases in 2009 and 2012, was 6.7% in monetary terms. Taking the inflation factor into account, the growth of GRP did not actually exceed the criterion for sustainable development. In the forecast for the development of the region's economy for 2013-2015, GRP growth was projected at 2 to 3% annually [5]. Accordingly, on this indicator the Kemerovo region meets the criterion requirements as a whole.
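The growth rates in the last two paragraphs can be checked from the endpoint figures quoted above; a minimal sketch (note that the compound rate for GRP works out to roughly 7.4%, slightly above the quoted 6.7%, which presumably averages the year-on-year changes):

```python
# Compound annual growth rates implied by the endpoint figures cited
# in the text (Kemerovo region, 2009 vs. 2013).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

pop_rate = cagr(2773.0, 2734.1, 4)   # population, thousand people
grp_rate = cagr(576.0, 767.0, 4)     # GRP, billion rubles (nominal)

print(f"Population: {pop_rate:+.2%} per year")  # about -0.35%
print(f"GRP:        {grp_rate:+.2%} per year")  # about +7.4% (nominal)
```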
The third criterion is a level of deforestation not exceeding 0.1% per year. A deforestation index of 0.5-1.0% per year is considered critical, and more than 1% destructive. According to the "Forest Plan of the Kemerovo Region", over the last reporting period (from 2008 to 2010) the total area of forest fund lands increased from 5413.5 to 5423.6 thousand hectares, an average of 0.093% per year, and the area covered by forest vegetation increased from 5115.4 to 5128.2 thousand hectares, that is, by an average of 0.125% per year [4]. At the same time, according to the Territorial Body of the Federal State Statistics Service for the Kemerovo Region [4], the scale of artificial reforestation and afforestation shows noticeable growth: the area of afforestation in 2013 increased by a factor of 4.7 compared with the previous year. Consequently, the Kemerovo region clearly meets the requirements of this criterion.
The fourth criterion is the relative area of forests, comprising more than 30% of the total area. A forest share of 15-20% is considered critical, and less than 10% destructive.
As is known, a specific feature of the Kemerovo region is the wide spread of man-made landscapes, whose formation is associated with the open-pit development of coal and other deposits: in fact, the whole territory from Mezhdurechensk to Anzhero-Sudzhensk is an alternation of coal mines, open pits and waste heaps, concentrating factories, hydraulic structures, and the like. As a result of open-pit mining, complex disturbance of lands occurs, and the area disturbed by mining works is at least 64.8 thousand hectares. Simultaneously, up to 80% of the forests are affected by intensive industrial cutting [5].
The fifth criterion is an area of farm fields of more than 0.3 hectares per person. An area of 0.15 to 0.2 hectares per person is critical in S. Murai's model, and less than 0.1 hectares is destructive. According to the Administration of the Kemerovo Region, 2399 thousand hectares of agricultural land are in agricultural turnover overall, which is 27% of the total land area of Kuzbass, while arable land occupies 1483 thousand hectares. The sowing area for the 2012 harvest was 1048.8 thousand hectares. Accordingly, Kuzbass belongs to the regions with a high level of plowing, and each inhabitant of the Kemerovo region accounts for 0.5 hectares of arable land [10]. Thus, the Kemerovo region fully meets the requirements of this criterion of sustainable development.
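Taken together, the criteria above amount to a simple banding rule; a minimal sketch of how the quoted Kemerovo figures fall into S. Murai's bands (the "between named bands" label is our assumption for values falling outside the named thresholds, and the forest-share criterion is omitted because no regional percentage is quoted; nominal GRP growth of 6.7% falls between the named bands, while the text argues that inflation-adjusted growth stays within the sustainable range):

```python
# Sketch: checking the S. Murai criteria discussed above against the
# Kemerovo figures quoted in the text. Thresholds follow the text.

def band(value, sustainable, critical, destructive):
    """Assign a value to one of S. Murai's named bands."""
    if sustainable(value):
        return "sustainable"
    if critical(value):
        return "critical"
    if destructive(value):
        return "destructive"
    return "between named bands"

checks = {
    # forest area actually grew by ~0.093%/yr, so net deforestation is negative
    "population growth, %/yr": band(-0.35, lambda v: v < 0.5,
                                    lambda v: 1.0 <= v <= 1.5,
                                    lambda v: v > 2.0),
    "GRP growth, %/yr":        band(6.7, lambda v: 3 <= v <= 5,
                                    lambda v: 8 <= v <= 10,
                                    lambda v: v > 10 or v < 0),
    "deforestation, %/yr":     band(-0.093, lambda v: v <= 0.1,
                                    lambda v: 0.5 <= v <= 1.0,
                                    lambda v: v > 1.0),
    "arable land, ha/person":  band(0.5, lambda v: v > 0.3,
                                    lambda v: 0.15 <= v <= 0.2,
                                    lambda v: v < 0.1),
}
for criterion, verdict in checks.items():
    print(f"{criterion}: {verdict}")
```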
Results and Discussion
The weak realization of sustainable development factors in the Kemerovo region's economy is due to its predominantly raw-material orientation and the lack of manufacturing industries with a relatively low impact on the environment.
In the economy of "old industrial" Russian region -the Kemerovo region -despite the high level of urbanization and concentration of basic industries, the problems of import dependence, specific for the entire Russian economy, are deepening.This can be illustrated by the comparative dynamics of coal mining and exports of metal on the one hand, and the import of engineering products on the other (Figure 1).As follows from the data shown in Fig. 13, the steady growth in coal output in the region (by 109% over the period 1998-2015) is accompanied, on the one hand, by accelerated replacement of the means of production of coal mines and sections.On the other hand, since 2003, as the consequences of the 1998-1999 devaluation of the ruble having been over, there was a faster growth in the imports of machine-building products in Kuzbass (4 times in 2002-2012).In 2014, a new round of devaluation led to a fall in regional machine-building imports (from $ 1,360 to $ 322 million), but as early as 2015, its volumes increased by 8%.
At the same time, it is machine building on which the neo-industrial transformation of the region's economy depends, the starting point of which should be the technological modernization of the extractive and basic industries. However, the machine-building industry of the Kemerovo region is experiencing significant problems and cannot act as a "catalyst" for neo-industrial transformation.
The need for import substitution in the Kemerovo region's economy is especially acute owing to the considerable wear of the means of production in use and the predominance of imports in their least worn part. Thus, the large equipment of coal mines (excavators, drilling rigs, pumping stations, transformer stations) is worn out by more than 70%, whereas most of the equipment purchased in the last 10 years, with a wear level of less than 30%, is imported.
Meanwhile, Kuzbass has a certain reserve for forming the institutions necessary to initiate neo-industrial import substitution. In particular, the Coordination Center for the Development of Import Substitution has been established in the region, and a Regional Plan and a list of investment projects in this area have been created to address the regional problems of developing neo-industrial import substitution. Technological modernization is necessary, first of all, in the extractive and basic industries, which, in turn, will create a reserve for the import substitution of manufacturing and high-tech industries.
As applied to the Kemerovo region's economy, such an industry is rightfully mechanical engineering, particularly the segment that manufactures products for mining enterprises. In this regard, we propose the creation of a Kuzbass neo-industrial import-substituting cluster, focused on the following tasks: 1. Deep reconstruction and technical and technological modernization of the region's industrial enterprises, a large part of which were created in the 1960s and 1970s and whose average level of fixed-capital depreciation is about 70%.
2. Breaking the vicious circle of investment and production problems: significant physical wear and tear of fixed capital leads to a decrease in the international competitiveness of machine-building products, a reduction in sales volumes, a lack of investment resources for the modernization of production, and a further increase in physical wear and tear. This requires the formation of an Investment Program for the regional machine-building industry, reflecting the needs of the region's coal, chemical, and metallurgical industries for new equipment.
3. Increasing the degree of cooperation among the machine-building enterprises of the region, which today is estimated at less than 10%, and overcoming their technological disunity. An important prerequisite for the development of cooperative relations within the cluster is the presence in the region of the Association of Machine-builders of Kuzbass, which unites about thirty companies in the Kemerovo, Novosibirsk, and Tomsk regions.
To form a neo-industrial import-substituting machine-building cluster in Kuzbass, we consider the following to be necessary.
First, to draft a development strategy for this cluster, linking investments in the technical re-equipment of the coal, chemical, and engineering industries and instrumentation with government support and guarantees for attracting long-term loans.
Second, to compile the list of residents of the cluster, including both machine-building enterprises and entities from the financial, scientific, research, and educational spheres of the region, as well as foreign developers of modern machine-building technologies and software.
Third, to provide institutional support for the creation of an import-substituting cluster in the region:
- the organization of an investment consortium, which should include the main consumers of engineering products (enterprises of the coal, chemical, and machine-building industries);
- the organization of a regional Agency for attracting and protecting investments in the import substitution of machine-building products;
- the organization of a guarantee fund for investments in engineering, in which the main role should be played by the regional administration.
Fourth, to compile the list of regional tax benefits provided to residents of the machine-building cluster.
Conclusion
Thus, setting the mining region on a trajectory of sustainable development requires initiating neo-industrial import substitution. To do this, it is important to form innovation-industrial clusters of a special, networked type, in which enterprises and research organizations from different clusters are integrated into a single research and production complex based on a technological platform of import substitution. The uniting elements should be the innovation and production infrastructure, as well as investment projects for the production of import-substituting products covering all residents of the cluster.
Fig. 1. Comparative dynamics of extraction and exports of raw materials and imports of engineering products in Kuzbass (logarithmic scale). | 3,351.6 | 2018-06-01T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
PDX1, a cellular homeoprotein, binds to and regulates the activity of human cytomegalovirus immediate early promoter.
Cellular homeoproteins have been shown to regulate the transcription of several viruses, including herpes simplex viruses, human papillomaviruses, and mouse mammary tumor viruses. Previous studies investigating the anti-viral mechanisms of several cyclin-dependent kinase inhibitors showed that the homeoproteins, pre B-cell leukemia transcription factor 1 (PBX1) and PBX-regulating protein-1 (PREP1), function as transcriptional activators of Moloney murine leukemia virus. Here, we examined the involvement of cellular homeoproteins in regulating the activity of the human cytomegalovirus immediate early (CMV IE) promoter. We identified a 45-bp element located at position -593 to -549 upstream of the transcription start site of the CMV IE gene, which contains multiple putative homeoprotein binding motifs. Gel shift assays demonstrated the physical association between a homeodomain protein, pancreatic-duodenal homeobox factor-1 (PDX1) and the 45-bp cytomegalovirus (CMV) region. We further determined that PDX1 represses the CMV IE promoter activity in 293 cells. Overexpression of PDX1 resulted in a decrease in transcription of the CMV IE gene. Conversely, blocking PDX1 protein synthesis and mutating the PDX1 binding sites enhanced CMV IE-dependent transcription. Collectively, our results represent the first work demonstrating that a cellular homeoprotein, PDX1, may be a repressor involved in regulation of human CMV gene expression.
It has been hypothesized that the CDKIs block viral replication by inhibiting the transcription of specific cellular genes that are required for viral infection. Our previous work tested this hypothesis using microarray technology and identified a cellular homeoprotein, pre B-cell leukemia transcription factor 1 (PBX1), as a target of the CDKIs and a required cellular co-factor for Moloney MLV replication (3). PBX1 was shown to form a heterodimer with another homeodomain protein, PBX-regulating protein-1 (PREP1), and to function as a transcriptional activator of Moloney MLV (3). The PBX1-PREP1 DNA binding motif, TGATTGAC, was further shown to be conserved in the long terminal repeats of 14 other murine retroviruses (3), suggesting the importance of homeoproteins in regulating retroviral transcription.
A number of cellular homeoproteins, including OCT-1, Brn-3a, and Brn-3b, as well as CCAAT displacement protein (CDP), have previously been shown to play a role in viral transcription across several virus families. The HSV transcriptional activator VP16, for example, directs the formation of a multiprotein-DNA complex on a specific element found in the promoters of HSV immediate-early genes (10). This VP16-induced complex is composed of two cellular proteins, HCF-1 and the homeoprotein OCT-1 (11). OCT-1 contains a POU (Pit-Oct-Unc) DNA-binding domain, which is composed of an amino-terminal POU-specific domain and a carboxyl-terminal POU homeodomain (12, 13). The OCT-1 POU homeodomain plays an important role in directing the binding and formation of VP16-induced complexes (14, 15). Other POU family transcription factors, Brn-3a (also known as Brn-3.0) and Brn-3b (also known as Brn-3.2), originally identified in neuronal cells (16-18) and subsequently observed in cervical cells, have been shown to regulate viral transcription of human papillomavirus-16 (HPV-16) and HPV-18 (19, 20). Both factors were shown to bind directly to the upstream regulatory region of the virus genome; however, the two factors exert opposing effects, with Brn-3a activating transcription directed by the HPV upstream regulatory region and Brn-3b repressing E6 and E7 expression (20). Furthermore, a possible involvement of Brn-3a in the pathogenesis of HSV has also been suggested (21). The silencer CDP, the mammalian homolog of the Drosophila CUT protein and itself a homeoprotein, negatively regulates HPV-dependent transcription by binding to specific DNA elements located in the viral promoters and long control regions, thus repressing HPV replication (22-24). In addition, CDP has been shown to block the transcription of mouse mammary tumor virus through binding to the negative regulatory regions located in its long terminal repeats (25-27).
Due to the emergence of a role for homeodomain proteins in the regulation of viral transcriptional processes, we explored the possible involvement of these factors in regulating transcription of the β-herpesvirus group member, human cytomegalovirus (CMV). Expression of the human CMV immediate early (IE) gene is critical for productive viral replication (28, 29). It has been shown that the replication and transcription of human CMV depend on the cellular differentiation status of the host cell. For example, in undifferentiated monocytes, little or no human CMV IE gene product is expressed, and CMV remains latent. Abundant IE gene products are produced and infectious CMV is generated when the monocytes differentiate into macrophages (30). The molecular basis for the repression of the CMV IE gene remains largely unknown. In this report, we test a panel of homeoproteins, including HOXA9, PBX1, PREP1, MEIS1, and pancreatic-duodenal homeobox factor-1 (PDX1), a protein previously shown to complex with PBX1 in the regulation of elastase and somatostatin gene expression (31-33), for their ability to modulate the activity of the human CMV IE promoter. We identified a 45-bp element located within the promoter of the CMV IE gene, which contains 12 putative binding sites for these homeoproteins. Our results indicate that multiple cellular proteins bind to the 45-bp CMV element and that PDX1 is present in this specific protein-DNA complex. We further demonstrate the functional significance of this interaction in cell-based assays that measure CMV IE-dependent transcription. PDX1 was found to negatively regulate the activity of the CMV IE promoter in 293 cells, and mutations in the PDX1 binding motifs alleviated these repressive effects. From these experiments we conclude that PDX1 binds to a specific region of the human CMV IE promoter and represses the promoter activity.
EXPERIMENTAL PROCEDURES
Cells and Reagents-293, 293T, HeLa, PANC-1, and THP-1 cells were obtained from the American Type Culture Collection. Differentiation of THP-1 cells was achieved by supplementing the growth media with 10 nM phorbol 12-myristate 13-acetate (PMA, Sigma) for 2 days. 293 cells were transfected with a CMV-firefly luciferase plasmid, and stable integrants were selected using 5 μg/ml blasticidin (Invitrogen) to generate the CMV-Luc stable cell line, 293-CMV-Luc.
Plasmids and Plasmid Construction-The Pdx1 (accession number: X99894) complementary DNA (cDNA) was reverse-transcribed with gene-specific primers using the Qiagen OneStep RT-PCR (reverse transcription-PCR) kit as prescribed by the manufacturer. In brief, the cDNA product was synthesized and amplified from 1 μg of purified human pancreatic poly(A) mRNA (Clontech) using two primers (5′-AATAGGATCCGCCGCAGCCATGAACGGCGA and 5′-CTCCTCTAGACTCTCATCGTGGTTCCTGCG). The resultant product was inserted into the multiple cloning site of pUB6/V5-His B (Invitrogen) using the appended BamHI and XbaI (underlined) restriction sites to generate pUB-PDX1, in which the expression of Pdx1 is driven by the human ubiquitin C promoter. Two PDX1 mutants, pUB-PDX1H189F and pUB-PDX1I192Q, were created using the QuikChange site-directed mutagenesis kit (Stratagene) as described previously (34). A pUB-PREP1 plasmid was generated by cloning the coding sequences of Prep1 into the pUB6 vector using CMV-PREP1 as the DNA template (3). The pCITE-PBX1a, pCITE-MEIS1, and pCITE-PREP1 plasmids were constructed as described previously (3). The pCITE-PBX1b and pCITE-PDX1 plasmids (including wild-type and mutant Pdx1) were created by cloning the coding regions of Pbx1b and Pdx1 into pCITE vectors using CMV-PBX1b and pUB-PDX1 (wild-type or mutant Pdx1) as templates (3). The pCITE constructs were used to synthesize translated proteins in vitro using the TNT Quick Coupled Transcription/Translation System (Promega). Three luciferase plasmids were used in this study: ubiquitin-firefly luciferase (pUB-F-Luc), CMV-firefly luciferase (CMV-F-Luc; containing the human CMV IE promoter-enhancer), and CMV-Renilla luciferase (CMV-R-Luc; containing the human CMV IE promoter-enhancer). The firefly luciferase sequences of the pGL2 control (Promega) were subcloned into pUB6 or pcDNA6 (Invitrogen) to generate pUB-F-Luc or CMV-F-Luc. CMV-R-Luc was purchased from Promega (i.e. pRL-CMV). Mutations and deletions of the human CMV IE promoter were constructed using the QuikChange site-directed mutagenesis kit (Stratagene). All CMV-Luc mutants were generated using CMV-R-Luc as the template. To mutate the potential homeoprotein binding tetramers, the middle two nucleotides were changed to two cytosines (for example, TAAT to TCCT).
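The tetramer-mutagenesis rule used here is mechanical enough to express in a few lines; a minimal sketch (the function name and example sequences are ours, not from the paper):

```python
# Sketch of the mutagenesis rule described above: the middle two
# nucleotides of a target tetramer are replaced with "CC"
# (e.g. TAAT -> TCCT).

def mutate_tetramer(seq: str, start: int) -> str:
    """Replace the middle two bases of the tetramer beginning at `start` (0-based)."""
    if start < 0 or start + 4 > len(seq):
        raise ValueError("tetramer out of range")
    return seq[:start + 1] + "CC" + seq[start + 3:]

print(mutate_tetramer("TAAT", 0))          # -> TCCT
print(mutate_tetramer("GGCATTGATTAT", 5))  # hypothetical position; TGAT -> TCCT
```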
EMSAs-Three different DNA fragments of the 45-nucleotide CMV sequences were used as DNA probes for EMSAs: CMV1 (5′-GGCATTGATTATTGACTAGTTATTAATAGTAA), CMV2 (5′-AATAGTAATCAATTACGGGGTCATTAGTTCA), and CMV12 (5′-GGCATTGATTATTGACTAGTTATTAATAGTAATCAATTACGGGGTCATTAGTTCA), which contain the first half, the second half, and the entire 45-bp region, respectively. Electrophoretic mobility shift assays (EMSAs) were performed as described previously (3) using CMV1, CMV2, or CMV12 as the DNA probe. Briefly, anti-PBX1, anti-MEIS1, anti-PREP1, anti-PDX1, anti-HOXA9, or anti-SP1 antibodies (Santa Cruz Biotechnology, Inc.) were incubated with nuclear extracts prepared from 293 or HeLa cells for 10 min at room temperature before the 32P-labeled probe was added. When in vitro translated proteins were used in EMSAs, DNA binding reactions were performed at 4°C for 30 min. DNA and DNA-protein complexes were resolved on 5% non-denaturing polyacrylamide gels at room temperature in 0.3× TBE (27 mM Tris-borate, pH 8.3, 0.6 mM EDTA). Following electrophoresis, the gels were dried and exposed to x-ray film.
Transfection and Luciferase Assays-293 or 293T cells were grown to 50-80% confluence in 96-well plates. Transfections were performed using FuGENE 6 (Roche Applied Science) or LipofectAMINE 2000 reagent (Invitrogen) as described in the manufacturers' manuals. For the overexpression assays, 293 or 293T cells were co-transfected for 48 h with CMV-R-Luc, the indicated expression vectors (i.e. pUB-PDX1, -PDX1H189F, -PDX1I192Q, or -PREP1), and a pUB-F-Luc internal control plasmid. In the CMV mutant assays, 293 and 293T cells were co-transfected with pUB-F-Luc (as the internal control) and a wild-type or mutant CMV-R-Luc. Firefly and Renilla luciferases were measured using the Dual-Glo assay system (Promega), and the activities were determined using an Acquest multimode reader (LJL Biosystems, Inc.).
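The normalization logic behind this dual-reporter design can be sketched as follows; the numbers are purely illustrative and chosen to reproduce a 33%-style decrease, not actual assay readings:

```python
# Sketch of dual-luciferase normalization: the CMV-driven Renilla signal
# is divided by the ubiquitin-driven firefly internal control to correct
# for transfection efficiency, then expressed relative to the control
# condition. All values below are invented for illustration.

def normalized_activity(renilla: float, firefly: float) -> float:
    """Renilla reporter signal corrected for transfection efficiency."""
    return renilla / firefly

control = normalized_activity(renilla=10_000, firefly=5_000)  # empty vector
pdx1    = normalized_activity(renilla=6_700,  firefly=5_000)  # +PDX1 plasmid

print(f"Relative CMV IE activity: {pdx1 / control:.0%}")  # 67%, i.e. a 33% decrease
```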
Short interfering RNAs (siRNAs) targeting positions 29-47 (Pdx1-1 siRNA) or 554-572 (Pdx1-2 siRNA) of the Pdx1 open reading frame and positions 20-38 of the Prep1 open reading frame (accession number: XM_033008) were purchased from Qiagen. A fluorescein-labeled luciferase GL2 duplex was obtained from Dharmacon and used to determine transfection efficiency. An siRNA targeting Renilla luciferase was used as a control (35). 293 and 293T cells were co-transfected with CMV-F-Luc and the appropriate siRNA (i.e. Pdx1, Prep1, or R-Luc siRNAs). The effects of the siRNAs on firefly luciferase expression and activity were measured using the Bright-Glo assay system (Promega) 48 h after transfection. The inhibitory effects of the Pdx1 and Prep1 siRNAs on cellular PDX1 and PREP1 protein synthesis were further examined by Western blot analysis using anti-PDX1 and anti-PREP1 antibodies (Santa Cruz Biotechnology, Inc.).
PDX1 Associates with a Specific Region of the Human CMV IE Promoter-Our previous work identified two cellular homeodomain proteins, PBX1 and PREP1, as transcriptional activators of Moloney MLV (3). To further investigate a possible role for homeoproteins in the transcriptional regulation of other virus families, we examined the promoter-enhancer regions of the human CMV IE gene for consensus homeoprotein binding elements. A 45-nucleotide fragment located at position −593 to −549 upstream of the transcription start site of the CMV IE gene was found to contain numerous putative homeoprotein binding sites (Fig. 1). Including the reverse complementary sequences, 12 potential tetramer binding sites for homeoproteins were identified, including two PBX1 (i.e. TGAT) (36, 37), two PREP1 or MEIS1 (i.e. TGAC) (36-38), six PDX1 (i.e. TAAT) (39, 40), and eight HOX binding sites (i.e. TAAT or TTAT) (41-43) (Fig. 1). It has been shown that PDX1 associates with the B element of the transcriptional enhancer of the pancreatic elastase I gene by forming a trimeric complex with PBX1b and MEIS2 in pancreatic acinar cell lines, whereas PDX1 binds the B element alone in β-cell lines (32). It has additionally been observed that cooperative interactions occur between PBX1, PREP1, and PDX1 on the somatostatin mini-enhancer (31).
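This motif census is easy to reproduce computationally; a minimal sketch that scans both strands of the CMV12 probe sequence quoted under "Experimental Procedures" (the probe carries a few bases flanking the 45-bp element) and recovers the counts given in the text:

```python
# Scan both strands of the CMV12 probe for the tetramer motifs named
# above. On this sequence the scan reproduces the stated counts:
# 2x TGAT (PBX1), 2x TGAC (PREP1/MEIS1), 6x TAAT (PDX1),
# and 8x TAAT/TTAT (HOX).

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def count_motif(seq: str, motif: str) -> int:
    """Count overlapping motif hits on one strand."""
    return sum(seq.startswith(motif, i) for i in range(len(seq) - len(motif) + 1))

def count_both_strands(seq: str, motif: str) -> int:
    rev_comp = seq.translate(COMPLEMENT)[::-1]
    return count_motif(seq, motif) + count_motif(rev_comp, motif)

cmv12 = "GGCATTGATTATTGACTAGTTATTAATAGTAATCAATTACGGGGTCATTAGTTCA"
factors = {"PBX1": ["TGAT"], "PREP1/MEIS1": ["TGAC"],
           "PDX1": ["TAAT"], "HOX": ["TAAT", "TTAT"]}
for factor, motifs in factors.items():
    total = sum(count_both_strands(cmv12, m) for m in motifs)
    print(f"{factor}: {total} site(s)")
```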
Based on these observations, we performed EMSAs to examine the possible association between the 45-bp CMV element and homeodomain proteins, including PBX1, MEIS1, PREP1, and PDX1.
The 45-bp region of interest was divided into two segments extending from position −593 to −571 (CMV1) and from position −570 to −549 (CMV2). Incubation of 32P-labeled CMV1 DNA with nuclear extracts prepared from 293 cells resulted in the formation of a major DNA-protein complex, complex C (Fig. 2A; indicated as "C"). The specificity of the complex for CMV1 binding was confirmed using unlabeled specific competitor oligonucleotides (i.e. unlabeled CMV1; Fig. 2A). The interaction between the CMV1 region and the associated protein complex was then examined using antibodies specific to the individual homeoproteins. Incubation of nuclear extracts with antibodies raised against PDX1 proteins resulted in the appearance of a supershifted complex (Fig. 2A, arrow). However, the incubation of nuclear extracts with antibodies against PBX1, MEIS1, or PREP1 proteins did not produce any significant new species (Fig. 2A). A parallel set of EMSAs performed using CMV2 as the DNA probe yielded the same complex C as indicated in Fig. 2A. The further addition of anti-PDX1 antibodies to the reaction mixture resulted in a supershift of complex C (Fig. 2B, arrow). No effects on the DNA-protein complex were seen using anti-PBX1, -MEIS1, or -PREP1 antibodies (data not shown). When the same EMSAs (CMV1 and CMV2 as DNA probes) were performed using nuclear extracts prepared from HeLa cells instead of 293 cells, identical results were obtained. In agreement with our findings using 293 nuclear extracts, the supershifted C complex was observed only when anti-PDX1 antibodies were included in the reaction (data not shown).
It has been shown that HOXA9 can form a trimeric protein complex with PBX1-MEIS1 or PBX1-PREP1 (43,44). Because there are eight potential HOX binding sites present in the 45-bp CMV region, we investigated the possible association between HOXA9 proteins and the CMV DNA element. EMSAs were thus performed, incubating anti-HOXA9 antibodies with nuclear extracts. The presence of HOXA9 antibodies did not, however, affect the formation of the CMV DNA-protein complex (data not shown).
To further validate the EMSA data obtained using nuclear extracts and antibodies, we also performed EMSAs using in vitro translated PBX1a, PBX1b, MEIS1, PREP1, HOXA9, and PDX1 proteins. The results showed that only the PDX1 protein was able to bind the CMV1 DNA probe (Fig. 2C). PDX1 further did not appear to cooperate with the other homeodomain proteins in associating with CMV1 (Fig. 2C). Furthermore, the complex of CMV1-PDX1 (Fig. 2C) was much smaller than that observed in the EMSAs using nuclear extracts (Fig. 2A), suggesting that other cellular proteins are also present in the CMV1-protein complexes. An identical set of results was obtained when CMV2 was used as the DNA probe (data not shown). Lastly, the in vitro synthesized HOXA9 protein did not appear to interact with the CMV DNA either as a monomer, in combination with the other homeoproteins as a dimer (i.e. PBX1-HOXA9, MEIS1-HOXA9, or PREP1-HOXA9), or as a trimer (i.e. PBX1-MEIS1-HOXA9 or PBX1-PREP1-HOXA9) (data not shown). Collectively, these gel shift experiments indicate that the PDX1 protein is a component of a multiprotein complex that binds a region of the CMV IE promoter.
The expression of Pdx1 was initially described to be restricted to the pancreas and duodenum (45,46). It was later reported that Pdx1 is also expressed in the developing brain (47). However, the expression of PDX1 in 293 and HeLa cells is unknown. Western blot analysis was thus performed using nuclear extracts prepared from 293 and HeLa cells (Fig. 3). In vitro synthesized recombinant PDX1 protein, which contains additional amino acids at the N terminus of PDX1, was used as a positive control (Fig. 3). Nuclear extracts of the pancreatic cell line PANC-1 were also analyzed by Western blot. As shown in Fig. 3, the expression of PDX1 in 293 and HeLa cells was confirmed.
We next wished to examine the expression of PDX1 in the CMV host cells. It has been shown that the expression of human CMV IE gene does not occur in monocytes but is expressed in terminally differentiated macrophages (30). THP-1, a human monocytic cell line, can be induced to differentiate into macrophage-like cells through treatment with PMA (48). Nuclear extracts of untreated and PMA-treated THP-1 cells were analyzed by Western blot using anti-PDX1 antibodies.
The results indicate that PDX1 is expressed in both undifferentiated and differentiated THP-1 cells (Fig. 3).
FIG. 2 (legend, partial). Anti-SP1 antibodies were used as the negative control (lane 10). B, an identical set of EMSAs was performed using radiolabeled CMV2 (containing the second half of the 45-bp CMV element, 5′-GTAATCAATTACGGGGTCATTA) as the DNA probe; the position of the supershifted complex C is indicated by an arrow. C, EMSA was carried out using the CMV1 probe and in vitro translated PBX1a, PBX1b, MEIS1, PREP1, and PDX1 proteins produced by a coupled reticulocyte lysate system. The first lane (indicated by an asterisk) contained lysates alone, without an expression construct, and demonstrates the binding of endogenous complexes in lysates. The arrow indicates the binding of PDX1 proteins. Similar results were obtained when the CMV2 probe was used (data not shown).
PDX1 Binds to Multiple Sites within the Human CMV IE Promoter-We next wished to determine the exact PDX1 binding site(s) within the 45-bp CMV region. There are six putative PDX1 binding tetramers, TAAT, present in this region (Fig. 1). Of note, there are also two TTAT and two TGAT tetramers that contain a mismatch at a single position and could therefore comprise PDX1 binding sites. To determine which tetramers conferred association with PDX1, point mutations were generated at each of the ten potential PDX1 binding sites (i.e. TAAT, TTAT, and TGAT) by changing the central two nucleotides to two cytosines. Six possible PDX1 sites are present within CMV1 (tetramers 1-6; Fig. 4A) and four in CMV2 (tetramers 7-10; Fig. 4C). EMSAs were thus performed using in vitro synthesized PDX1 protein and a wild-type or site-mutated CMV1 DNA probe. As shown in Fig. 4B, although mild losses of binding were observed for mutations at each site, mutations at tetramers 4 and 6 (both TAAT) and, to a lesser extent, tetramer 3 (i.e. TTAT) prevented PDX1 binding, suggesting that PDX1 binds these regions preferentially over sites 1, 2, and 5. A similar approach was used to examine the potential PDX1 binding motifs in the CMV2 nucleotides (Fig. 4C). Only slight effects on PDX1-CMV2 complexes were observed when sites 7-9 were changed (Fig. 4D). On the other hand, a significant reduction in PDX1 binding was detected when a mutation was introduced into site 10 (a TAAT), suggesting that PDX1 binds site 10 in preference to tetramers 7-9 (Fig. 4D). To further validate the interaction between PDX1 and the 45-bp region of the CMV IE promoter, two PDX1 mutant proteins (i.e. PDX1H189F and PDX1I192Q), which contain mutations within the homeodomain and fail to bind the PDX1 DNA motif, were generated and analyzed by EMSAs (34). As shown in Fig. 4E, neither PDX1 mutant associated with the 45-bp region, reconfirming the specific interaction between the PDX1 protein and the 45-bp element. Collectively, these experiments identify three TAAT sites (sites 4, 6, and 10) as major PDX1 binding sites in the CMV IE promoter.
The correlation between the major PDX1 binding elements (i.e. sites 4, 6, and 10) and the formation of the CMV-protein complexes was also examined using 293 nuclear extracts instead of in vitro translated PDX1 protein in the assays. Mutations at sites 4 and 6 caused significant, but not complete, disruption of the CMV1-PDX1 association, whereas mutation of site 10 completely abrogated CMV2-PDX1 complex formation (data not shown). These data using nuclear extracts suggested that PDX1 is likely not the only determinant in the formation of these CMV DNA-protein complexes. It is possible that other cellular factors present in the CMV-protein complexes may also participate in CMV DNA binding and the formation of these complexes.
FIG. 4. Identification of PDX1 binding sites in the 45-bp CMV region. A, potential PDX1 binding motifs present within CMV1 DNA are indicated, including three TAATs, two TTATs, and one TGAT. Six individual mutant CMV1 oligonucleotides were generated, each destroying a specific possible PDX1-binding tetramer (positions 1-6). For each mutant, the middle two nucleotides of the tetramer were changed to two cytosines (underlined). B, EMSA was carried out utilizing a wild-type (WT) or mutated oligonucleotide (mutations 1-6) combined with in vitro translated PDX1 proteins. The first lane contained the WT CMV1 DNA probe and lysates without the Pdx1 expression plasmid, whereas PDX1 proteins were included in the remaining lanes. The complexes containing PDX1 proteins are indicated by an arrow. The signal intensities of the PDX1-CMV1 DNA complexes were quantitated using the NIH Java-based image-processing program (NIH ImageJ), as shown at the bottom of the gel. The quantitative data are presented as percentages relative to the control, i.e. the PDX1-WT CMV1 complex. C, four potential PDX1 binding motifs are present within CMV2 DNA, including three TAAT and one TGAT tetramers. Four individual mutant CMV2 oligonucleotides were generated, each of which destroyed a specific possible PDX1 tetramer (positions 7-10). The middle two nucleotides of the tetramers were changed to two cytosines (underlined). D, EMSA was performed using a wild-type (WT) or mutated oligonucleotide (mutations 7-10) and the in vitro translated PDX1 proteins. The first lane contained the WT CMV2 DNA and lysates without the Pdx1 expression plasmid, whereas the PDX1 proteins were included in the remaining reactions. The binding of PDX1 protein is indicated by an arrow, and the quantitation of the PDX1-CMV2 DNA complexes is shown at the bottom of the gel. E, the CMV12 oligonucleotide and wild-type or mutant in vitro synthesized PDX1 proteins were utilized in the gel shift assay. The first lane contained the CMV12 DNA and lysates without the Pdx1 expression plasmid. The binding of wild-type PDX1 proteins is indicated by an arrow.
PDX1 Negatively Regulates Human CMV IE-dependent Transcription in 293 and 293T Cells-We further investigated the importance of PDX1 in the regulation of the human CMV IE promoter. 293T cells were co-transfected with a reporter vector (CMV-Renilla luciferase, or CMV-R-Luc) and an expression plasmid encoding PREP1 or wild-type or mutant PDX1 (i.e. pUB-PREP1, pUB-PDX1WT, pUB-PDX1H189F, or pUB-PDX1I192Q). The ubiquitin-firefly luciferase plasmid (i.e. pUB-F-Luc) was used as an internal control to normalize for transfection efficiency. Overexpression of PDX1 resulted in a 33% decrease in CMV-dependent transcription, whereas no effects were detected when PDX1H189F or PDX1I192Q was overexpressed in 293T cells (Fig. 5A). Furthermore, in agreement with the gel shift data (Fig. 2), no significant effects were observed when PREP1 was overexpressed (Fig. 5A). When the cell-based assays were performed using 293 cells, a 52% decrease in CMV IE-dependent expression was detected upon PDX1 overexpression (data not shown).
We also used Pdx1-targeted siRNAs to evaluate the effect of removing the PDX1 protein on CMV-mediated transcription. Two Pdx1 siRNAs (i.e. Pdx1-1 and Pdx1-2) were used in the assays. 293 cells were co-transfected with CMV-Luc and siRNAs directed against Pdx1 or Prep1. A 6-fold increase in CMV transcription was observed in Pdx1-2 siRNA-transfected cells, whereas no effects were detected with the Prep1 or Pdx1-1 siRNA (Fig. 5B; data not shown). Identical experiments were carried out using 293T cells, and similar results were obtained (data not shown). The Pdx1-2 and Prep1 siRNAs did not cause any significant effects on cellular toxicity or proliferation, as determined by Alamar Blue cell viability assays (data not shown). We also generated a stable 293 cell line expressing luciferase from the human CMV IE promoter (293-CMV-Luc cells) and performed a set of experiments identical to those described above. Overexpression of PDX1 caused a 30% decrease in CMV-dependent transcription (data not shown). In addition, a greater than 3-fold increase in luciferase activity was observed when 293-CMV-Luc cells were transfected with Pdx1-2 siRNA (data not shown).
The effects of the Pdx1 siRNAs on endogenous PDX1 protein levels were determined by Western blot analysis (Fig. 5C). As shown in Fig. 5C, Pdx1-2 but not Pdx1-1 siRNA induced a significant reduction in steady-state PDX1 levels within the cell. A 40-50% reduction in cellular PREP1 protein was also observed when cells were treated with Prep1 siRNA (data not shown). These findings likely understate the effects of the siRNAs under these conditions, because the transfection efficiency of these siRNAs in 293 cells was determined to be around 50% using a fluorescein-labeled luciferase GL2 siRNA (data not shown).
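The reasoning about transfection efficiency can be made explicit with a simple mixture calculation; a minimal sketch with illustrative numbers (a 50% efficiency and a 45% bulk reduction are assumptions consistent with the figures above, not measured values):

```python
# Why bulk measurements understate per-cell siRNA effects: if only ~50%
# of cells receive the siRNA, the observed bulk signal mixes transfected
# and untouched cells. Values below are illustrative only.

transfection_efficiency = 0.50   # fraction of cells receiving siRNA
observed_bulk_level = 0.55       # e.g. a 45% reduction in bulk PDX1 signal

# bulk = eff * per_cell_level + (1 - eff) * 1.0  ->  solve for per-cell level
per_cell_level = (observed_bulk_level - (1 - transfection_efficiency)) \
    / transfection_efficiency
print(f"Implied per-cell PDX1 level: {per_cell_level:.0%}")  # 10%, i.e. ~90% knockdown
```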
To further investigate the importance of PDX1 in regulating the activity of the CMV IE promoter, mutations in each identified or putative PDX1 binding site were generated using site-directed mutagenesis and tested for differences in expression (Fig. 6A). 293T cells were transiently transfected with a CMV-Luc plasmid containing either the wild-type CMV IE promoter or a mutation in the indicated PDX1 site. As shown in Fig. 6B, five mutations resulted in a greater than 2-fold induction of luciferase activity, including four TAAT tetramers (sites 5, 6, 9, and 10). A subset of these mutants (sites 6 and 10) was also identified as comprising major PDX1 binding sites in the EMSA experiments (Fig. 4, B and D). A CMV-Luc construct with double mutations at sites 6 and 10 was also generated and tested; however, no additive effects on luciferase activity were detected (Fig. 6B). Similar results were obtained when 293 cells were used in the transfection assays (data not shown).
We next wished to determine the effects of deleting regions of the 45-bp element on repression of the CMV IE promoter. Three CMV-Luc deletion constructs, CMV-Luc-del-1, CMV-Luc-del-2, and CMV-Luc-del-3, which lack the first 11, the first 33, or the entire 45 bp of the 45-bp region, respectively, were generated (Fig. 7A). CMV-Luc-del-1 retains all three major PDX1 binding sites, whereas CMV-Luc-del-2 and CMV-Luc-del-3 lack two or all three of the identified PDX1 sites, respectively (Fig. 7A). 293 and 293T cells were transiently transfected with the indicated CMV-Luc deletion mutant or with wild-type CMV-Luc, which served as a control. No significant effects on luciferase activity were detected with the CMV-Luc-del-1 construct; however, CMV-Luc-del-2 and CMV-Luc-del-3 resulted in up to a 2.5-fold increase in CMV IE-dependent expression (Fig. 7B).
We further wished to determine whether the effects of PDX1 on the CMV IE promoter were restricted to the 45-bp region under study. 293T cells were transiently transfected with a PDX1 expression vector and the wild-type or del-3 CMV-Luc reporter construct. As seen previously (Fig. 5A), the overexpression of PDX1 caused a 60% decrease in luciferase expression driven by the wild-type CMV IE promoter. When the CMV-Luc-del-3 plasmid was used instead, the repressive effect of PDX1 overexpression was less pronounced, resulting in a 40% inhibition (Fig. 7C). Although these results confirm that the 45-bp region contains PDX1 binding sites that may partially account for the PDX1-mediated repression, they also suggest that there might be additional PDX1 binding motifs outside the 45-bp region. Indeed, in our more recent analyses of the CMV IE promoter, we have identified additional PDX1 binding sites upstream of the 45-bp element (data not shown). Taken together, our findings suggest that PDX1 represses the promoter activity of the human CMV IE gene in 293 and 293T cells.
DISCUSSION
The human CMV IE promoter, one of the most potent RNA polymerase II promoters identified thus far, is commonly used for the overproduction of recombinant proteins in a wide range of mammalian cells. Here we demonstrate that a cellular homeodomain protein, PDX1, binds to the CMV IE promoter and down-regulates its activity in 293 and 293T cells. The identification of a specific 45-bp element, which spans nucleotides −593 to −549 of the CMV IE promoter and contains multiple putative homeoprotein binding sites, led to the discovery of PDX1 involvement in CMV-dependent transcription. Sites within this 45-nucleotide region demonstrated varying degrees of PDX1 binding by EMSA (Fig. 4). In cell-based reporter gene assays, ectopic expression of PDX1 resulted in a significant reduction in CMV IE-dependent luciferase activity, whereas PDX1 knockdown by siRNA caused a 6-fold increase in transcription (Fig. 5, A and B). Furthermore, an increase in CMV IE promoter activity was observed when the PDX1 DNA-binding motifs were mutated or the 45-bp region was deleted (Figs. 6 and 7). Collectively, these data imply a novel role for PDX1 as a cellular regulator of the human CMV IE promoter.
The region upstream of the CMV IE enhancer, between −750 and −550, has previously been described as a unique region containing multiple NF-1 binding sites (30, 49, 50). All of the identified NF-1 binding sites are located upstream of position −602 (49, 50). DNase I protection experiments examining interactions between endogenous nuclear proteins and the nucleotide sequence between −660 and −540 revealed five specific nuclear protein binding regions, including the sequences between −602 and −557, between −563 and −540, and between −602 and −582 (49). However, the identities of these cellular proteins have not been reported. The association between the cellular homeoprotein PDX1 and the CMV 45-bp region (between −593 and −549) demonstrated here presents a novel finding for understanding CMV biology.
The importance of this unique region in human CMV IE transcription and viral replication has been investigated previously. A recombinant CMV containing deletions from positions −640 to −583, which include the putative homeoprotein binding sites 1, 2, and 5 (Figs. 1 and 4A), did not show any significant effects on IE transcription (51). However, a deletion of the sequence between −521 and −579, which spans tetramers 3, 4, 6, 7, 8, 9, and 10, including the preferred PDX1 binding sites, has also been examined (28, 52). In these experiments, viral replication was shown to be comparable to that of wild-type CMV at a high m.o.i.; however, the replication rates of the mutant viruses were significantly increased at a low m.o.i. after 6 days post-infection (52). The IE gene products, IE1 and IE2, have been shown to be required for initiating viral replication (28, 29, 53, 54). The results from our study and the previous recombinant CMV assays (51, 52) suggest the possible involvement of PDX1 in human CMV replication through the transcriptional regulation of IE gene expression. Perhaps the inhibitory effects exerted by PDX1 on CMV IE transcription, and thus replication, are masked at a high m.o.i. of wild-type CMV relative to PDX1, whereas at a low m.o.i. PDX1 is not limiting and the repression is manifest.
In the human CMV genome, the unique region is located between the IE enhancer and the UL127 open reading frame. It has been demonstrated that the unique region negatively regulates expression from the UL127 promoter and may function as a boundary that efficiently blocks enhancer-promoter interactions (51,55). Ghazal and co-workers showed that a boundary domain containing nucleotides −532 to −598 (including the entire 45-bp element investigated in our work) confers repression on a heterologous promoter when placed between the enhancer and the promoter (55). These previous findings help to explain how the CMV IE enhancer could selectively activate expression of the CMV IE gene rather than the UL127 promoter. However, the repressive mechanism of the boundary domain on viral transcription still remains unknown. Our finding of an association between the cellular protein PDX1 and the 45-bp region provides novel insight for investigating how the boundary domain regulates CMV IE promoter activity.
To further investigate the importance of the PDX1 binding motifs, CMV-Luc constructs harboring mutations within individual PDX1 binding sites or deletions within the 45-bp element were generated and examined in a cellular context (Figs. 6 and 7). However, unlike the data obtained from Pdx1-2 siRNA-treated cells (Fig. 5B), no greater than a 3-fold increase in the activity of the CMV IE promoter-enhancer was observed, even with double mutations at the two major PDX1 binding sites (sites 6 and 10) (Fig. 6). It is likely that PDX1 could bind to other non-mutated PDX1 binding sites within the 45-bp region and still participate in complexes that partially repress CMV-dependent transcription. Alternatively, there might be other PDX1 binding sites located outside the 45-bp region. Indeed, in our ongoing study of the human CMV IE promoter, an additional PDX1 binding region has been identified upstream of the 45-bp element (data not shown). A greater increase in CMV IE transcription may be expected if all PDX1 binding sites present in the promoter-enhancer region could be eliminated. Furthermore, our results also demonstrated that PDX1 might not be the only factor that determines the formation of the specific CMV-protein complexes (Fig. 2). It is possible that one or more other unknown proteins present in the CMV DNA-protein complex might contribute binding energy and partially compensate for losses in PDX1-DNA binding affinity.
PDX1 has been shown to function as a transcriptional activator of several genes, including insulin (56,57), Glut2 (39), glucokinase (58), islet amyloid polypeptide (59,60), and somatostatin (31). Our data demonstrate a unique function for PDX1 whereby PDX1 negatively regulates the activity of the CMV IE promoter. It is possible that the additional proteins present in the PDX1-CMV DNA complexes could contribute to the inhibitory function of the multiprotein complex on CMV IE transcription. Interestingly, Lundquist et al. (51) have identified a putative human papillomavirus (HPV) silencing motif at position −591 to −584 of the CMV IE promoter. It has been shown that the transcriptional repressor CDP binds to this silencing element and blocks transcription and replication of HPV (23). Future experiments will be carried out to examine the potential involvement of CDP in modulating the activity of the human CMV IE promoter. The identification of other unknown cellular proteins present in the CMV IE DNA-protein complexes could provide new insights into our understanding of the transcriptional regulation of the human CMV IE gene.
Although PDX1 functions as a transcription activator in pancreatic cells, its effects on other genes or in other cells/tissues remain largely unknown. Here we show that PDX1 is also expressed in kidney (293 cells), cervix (HeLa cells), and monocytes (THP-1 cells) (Fig. 3). It is possible that the effects of PDX1 on transcriptional regulation are tissue-specific and that PDX1 negatively regulates certain specific genes, including viral genes. However, the regulatory mechanism of PDX1 on the activity of the CMV IE promoter remains to be determined. PDX1 might directly affect CMV IE-dependent transcription through association with the CMV IE promoter, interfering with the interaction between the promoter and other transcription factors. Alternatively, PDX1 could indirectly inhibit the CMV IE promoter by influencing the expression of other cellular genes that are involved in the regulation of CMV IE-dependent transcription.

FIG. 7. Deletion analysis of the 45-bp region of the human CMV IE promoter. A, diagram of three CMV-Luc deletion constructs, CMV-Luc-del-1, CMV-Luc-del-2, and CMV-Luc-del-3, in which the first 11, the first 33, or the entire 45 bp are deleted. The three major PDX1 binding sites are underlined. B, 293 or 293T cells were transiently transfected with the indicated deletion CMV-Luc plasmids or wild-type (WT) CMV-Luc, which served as a control. Luciferase activity was measured 24-30 h after transfection. C, 293T cells were transiently transfected with pUB-PDX1 and a CMV-Luc reporter plasmid, WT or del-3. Luciferase activity was measured 48 h after transfection.
We utilized 293 and 293T cells, which are non-permissive cell lines for human CMV but allow CMV IE-dependent transcription, in our study. Our data demonstrate that PDX1 is involved in the transcriptional repression of CMV IE-dependent transcription in 293 and 293T cells. It will be interesting to determine the effects of PDX1 on human CMV IE transcription in CMV host cells, such as monocytes. Cells of the monocyte lineage serve as reservoirs of latent human CMV and as vehicles for disseminating viral infection (30). Here we showed that PDX1 is expressed in human monocytes, raising the possibility that PDX1 could be involved in the regulation of human CMV transcription (Fig. 3). Future work will be performed to determine the functional role of PDX1 in human CMV transcription and replication.
Hoya thuathienhuensis and Hoya graveolens (Apocynaceae, Asclepiadoideae), a new species and a new record for the Flora of Vietnam
A new species from the Annamite mountain range of central Vietnam, Hoya thuathienhuensis, is here described and illustrated. Its flowers resemble those of Hoya lockii, a taxon recently described from the same area, with which it shares the reflexed corolla and thin coriaceous leaves. Hoya lockii is an epiphytic shrub, whereas H. thuathienhuensis is a strong climbing liana. We also report the noteworthy extension into southern Vietnam of Hoya graveolens, a taxon previously considered endemic to Thailand.
Field investigations carried out in 2010 in central Vietnam resulted in the discovery of a further new taxon whose white reflexed corollas and thin coriaceous leaves superficially resemble those of the co-occurring Hoya lockii V.T.Phạm & Aver. (Phạm & Averyanov 2012), but its growth habit is unique among Vietnamese Hoya, being a strong climbing terrestrial liana. In the present paper the new species, Hoya thuathienhuensis T.B.Tran, Rodda & Simonsson, is described and illustrated, and its morphological affinities with other Hoya species are discussed. Additionally, Hoya graveolens Kerr, a taxon in need of lectotypification, is recorded for the first time for the flora of Vietnam.
Hoya graveolens has so far been thought to be endemic to Thailand. Herbarium records indicate that, apart from a single specimen collected in Nakorn Ratchasima Province, the species appears to be limited to coastal areas in the proximity of Sriracha, the type locality.
Floristic investigations in the karst hills of Kiên Giang Province in southern Vietnam in 2007, 2008, and 2009 by one of the co-authors resulted in the collection of three specimens later identified as H. graveolens, thus extending its distribution area eastward by more than 500 km. The study of herbarium materials and the observation of the species in situ in Vietnam allowed us to prepare an extended description of the taxon and highlighted the need to select a lectotype.

Etymology - Hoya thuathienhuensis is named after the only Vietnamese province where it has so far been collected, Thừa Thiên-Huế Province in central Vietnam.
Habitat & Ecology - Hoya thuathienhuensis has been observed in secondary evergreen lower montane forest up to 20 m high, in proximity to primary forests. It is a strong climbing liana, rooted in the humus-rich forest floor.
Conservation status - The only locality where H. thuathienhuensis has been recorded lies within the proposed conservation area of the Hue Green Corridor (Averyanov et al. 2006), thus supporting the long-term preservation of the species in situ. Nonetheless, H. thuathienhuensis is to be considered Data Deficient (DD) according to IUCN Red List criteria (IUCN Standards and Petitions Subcommittee 2011) because it is known from only one collection and thus remains in need of further investigation with respect to future conservation efforts.
Notes - When H. thuathienhuensis was first observed in the field it was not in flower. From its vegetative structures, a strong terrestrial climber with coriaceous glabrous leaves, we first thought it might not belong to the genus. What initially suggested that H. thuathienhuensis may indeed belong to the genus are its extra-axillary, positively geotropic peduncles, bearing the scars of previous flowerings, which are diagnostic of the genus (Omlor 1996). After flowering in cultivation it became clear that the taxon belongs to Hoya and bears superficial similarities with the co-occurring H. lockii, so far the only other Hoya species observed in the Hue Green Corridor apart from Hoya carnosa R.Br. It can be easily separated from H. lockii by its growth habit: H. thuathienhuensis is a strong climbing liana while H. lockii is an epiphytic shrub. Further, the corolla lobes are similarly reflexed and densely hirsute underneath the corona in both species, but in H. lockii they are also slightly pubescent elsewhere and ciliate, while in H. thuathienhuensis they are glabrous and brilliant; the corona lobes are ovate-round with a short acute inner process depressed among the prominent outer processes (Fig. 1), while in H. lockii the corona lobes are laterally compressed and present a prominent inner process bearing a caudate, erect or slightly converging or diverging appendage with a linear, slightly recurved apex, extending 2-4 mm above the apex of the outer process.
The general habit of the plant, a strong terrestrial climbing liana, is, as noted, rare in Hoya and usually typical of species belonging to Hoya sect. Eriostemma. Species in this section are characterised by a mainly terrestrial, non-epiphytic habit, inflorescences on short-lived or deciduous peduncles, and large, fleshy corollas with proportionally small coronas (Wanntorp et al. 2011). No species of Hoya sect. Eriostemma are known to occur in Vietnam. Pollinaria in species belonging to sect. Eriostemma consist of club-shaped pollinia lacking pellucid margins, attached to the retinaculum by twisted and winged caudicles (Wanntorp et al. 2011). With its perennial peduncles, smaller flowers, and oblong pollinia with a clearly distinguished pellucid margin attached to the retinaculum by short broad caudicles (Fig. 1h), H. thuathienhuensis does not belong to sect. Eriostemma. Its sectional placement is uncertain; on the basis of pollinarium shape, it falls in a phylogenetically unsupported group of species bearing narrow elongate pollinia longer on one side, a pellucid margin along the entire dorsal edge, a narrowly rhomboid retinaculum apically pointed, and unwinged caudicles (Wanntorp 2007), whose sectional placement will need to be re-evaluated following recent molecular evidence (Wanntorp et al. 2011).
Hoya graveolens

Hoya graveolens Kerr (1939) 461. - Typus: Kerr 4245, Thailand, Sriracha, 15 May 1920 (lecto BM, designated here; iso K, P).
Hoya graveolens was described from a specimen collected in Sriracha, Thailand, by Kerr in 1920. Kerr selected his collection Kerr 4245 as the type but did not indicate in which herbarium it was deposited; thus a lectotype has to be selected. We found duplicates in BM, K, and P, and we hereby select the BM specimen as the lectotype, since it is a complete specimen with leafy stems and flowers. It is also the only specimen belonging to Kerr's own herbarium (labelled: Herb. A.F.G. Kerr - Bequeathed 1942) and bears pencilled drawings and measurements of the flowers, likely in Kerr's hand.
Distribution - So far only observed on three small karst hills along the coast in Kiên Giang Province, southern Vietnam.
Habitat & Ecology - Terrestrial or lithophytic, decumbent or weakly climbing, found in shaded areas at 0-110 m asl. The prevailing climate in this area is a monsoon subequatorial climate with an annual rainfall of 2100 mm and an average temperature of 27.4 °C (Nguyễn et al. 2000). This species has been observed flowering in Vietnam from April to May, in accordance with the flowering period observed in Thailand (March to May).
IUCN assessment - In Vietnam, H. graveolens has been observed only on the isolated karst hills of Kiên Giang Province, in a total area of less than 3 km². This area is being degraded by human activities such as fuelwood cutting and small-scale agriculture, and is exploited for cement production, making the species likely to become threatened in the future. Based on the IUCN Red List criteria (IUCN Standards and Petitions Subcommittee 2011), we assign a provisional conservation status of Near Threatened (NT).
A Potential New Source of Therapeutic Agents for the Treatment of Mucocutaneous Leishmaniasis: The Essential Oil of Rhaphiodon echinus
Weeds are an important source of natural products with promising biological activity. This study investigated the in vitro anti-kinetoplastid potential, cytotoxicity, and antioxidant capacity of the essential oil of Rhaphiodon echinus (EORe), an infesting plant species. The essential oil was analyzed by GC/MS. The antioxidant capacity was evaluated by reduction of the DPPH radical and of the Fe3+ ion. The Trypanosoma cruzi clone CL-B5 was used to assay anti-epimastigote activity. Antileishmanial activity was determined using promastigotes of Leishmania braziliensis (MHOM/CW/88/UA301). NCTC 929 fibroblasts were used for the cytotoxicity test. The results showed that the main constituent of the essential oil was γ-elemene. No relevant effect was observed concerning the ability to reduce the DPPH radical; only at the concentration of 480 μg/mL did the essential oil demonstrate high Fe3+-reducing power. The oil was active against L. braziliensis promastigotes but not against the epimastigote form of T. cruzi. Cytotoxicity for mammalian cells was low at the active concentration capable of killing more than 70% of promastigote forms. The results revealed that the essential oil of R. echinus showed activity against L. braziliensis, positioning itself as a promising agent for antileishmanial therapies.
Introduction
Protozoan parasites of the genus Leishmania cause a group of diseases known collectively as leishmaniasis, which affects numerous body systems (e.g., the epithelial and the muscular). About 1.3 million new cases and approximately 20,000 to 30,000 deaths from this pathology are reported annually. The disease is highly endemic in the Indian subcontinent and in East Africa, and more than 90% of cases occur in Bangladesh, Brazil, Ethiopia, India, and a few other endemic countries.
Essential Oil Obtained and Chemical Analysis
The essential oil of the dried leaves of R. echinus was obtained in a yield of 0.12%. A total of 21 compounds were identified in the chemical analysis of the essential oil, most of which belong to the sesquiterpene class. Their retention indices and relative quantities, listed in order of elution, are shown in Table 1.
The analyses showed that γ-elemene, a hydrocarbon sesquiterpene, was the predominant constituent of the essential oil, accounting for 21.83% of its composition. Moreover, α-bisabolene (12.82%), caryophyllene oxide (10.61%), and germacrene D (10.30%) were also representative, with concentrations higher than 10%. Relative proportions of the essential oil constituents are expressed as percentages; retention indices are from the literature (Adams, 1995).
Visualization Analysis of Descriptor Networks
The software found 583 possible keywords, so at least one occurrence was defined as the minimum limit. However, only the components obtained in the essential oil were selected for the creation of the visualization network. In this case, the terms selected were: aromadendrene, caryophyllene oxide, cineole, copaene, germacrene D, spathulenol, α-humulene, β-cymene, β-pinene, δ-cadinol, "essential oil", Rhaphiodon, and "R. echinus". As shown in Figure 1, the system identified two clusters; we observed that, although the clusters complement each other, their connections are very limited, indicating a subject that still needs to be explored by the scientific community.

Figure 1. Co-occurrence analysis of components obtained in the essential oil of Rhaphiodon echinus. Note: the size of the circle or node is equivalent to the occurrence number of the descriptor.
Effect of the Essential Oil on DPPH Radicals

At all concentrations (1, 30, 60, 120, 240, and 480 μg/mL) of the essential oil of Rhaphiodon echinus evaluated for antioxidant activity, no relevant effect was observed on the ability to reduce the DPPH radical: the IC50 of the EORe was 742.80 μg/mL, while the IC50 of ascorbic acid, a potent antioxidant, was 41.51 μg/mL, about 18 times lower (Figure 2A).

The power to reduce Fe3+ to Fe2+ of the essential oil of R. echinus (Figure 2B) was similar at almost all concentrations (1, 30, 60, 120, and 240 μg/mL) at the respective checking times (10, 20, 30, 50, 100, and 200 min), presenting lower absorbance values than those observed in the control curve (Fe3+). Only at the concentration of 480 μg/mL did the essential oil demonstrate high reducing power at all times analyzed. After the addition of ascorbic acid (AA), the absorbance values increased in all groups, owing to the high antioxidant power of this substance.

Cytotoxic Activity of R. echinus against Mammalian Fibroblasts

The cytotoxic potential of the essential oil from the leaves of R. echinus against NCTC 929 fibroblasts is shown in Figure 3. The essential oil of R. echinus at a concentration of 250 μg/mL completely killed the fibroblasts, while the same effect was obtained with nifurtimox (used as reference) at concentrations ranging from 200 to 600 μg/mL (data not shown). The order of effectiveness in killing fibroblasts was: nifurtimox (EC50 = 83.04 μg/mL) > essential oil of R. echinus (EC50 = 140.2 μg/mL) (Figure 3).
Bioactivity of EORe against Promastigote Forms of Leishmania braziliensis and Epimastigote Forms of Trypanosoma cruzi
To investigate the leishmanicidal activity, promastigotes of L. braziliensis were incubated in the presence of increasing concentrations of the essential oil, and cell viability was determined 48 h later. As shown in Table 2 and Figure 4, parasite growth was inhibited by both the essential oil and pentamidine, with calculated LC50 values of 56.45 µg/mL and 5.72 µg/mL, respectively. Our results indicate that the essential oil of R. echinus was less effective against the epimastigote forms of T. cruzi than against the promastigote forms of L. braziliensis (Figure 4); in the anti-epimastigote assay, an LC50 was obtained only for nifurtimox (3.04 µg/mL) at the concentrations tested (Table 2).
The essential oil of R. echinus did not inhibit the growth of T. cruzi at any of the tested concentrations when compared with the control. Against L. braziliensis, pentamidine was the more effective anti-kinetoplastid agent, since the concentration required to kill 50% of the parasites (LC50) was 5.72 µg/mL, while the LC50 calculated for the essential oil of R. echinus was 56.45 µg/mL.
Chemical Composition Analysis of the Essential Oil of R. echinus
Various substances are described in the essential oil of R. echinus (Table 1). These natural compounds are held responsible for anti-inflammatory and antimicrobial actions [17]. Torres et al. [18] evaluated the chemical composition of the essential oils from leaves and fruits of R. echinus by GC-MS and GC-FID and identified nineteen compounds, accounting for 93.8% of the leaf oil and 82.4% of the fruit oil. Both were dominated by sesquiterpenes, notably bicyclogermacrene (31.9% and 19.7%) and trans-caryophyllene (21.5% and 21.2%). The oil of Hyptis sideritis has a terpenic nature, containing monoterpenes and sesquiterpenes, similar to what was observed in Hyptis suaveolens (L.) Poiteau, whose essential oil was significantly toxic, with the difference that, in the latter, the main component identified was the sesquiterpene β-caryophyllene [19].
In the present study, the major component identified (21.83%) was the sesquiterpene γ-elemene, commonly assessed as a larvicide [20]. The second most abundant compound in the essential oil of R. echinus (12.82%) was α-bisabolene, which is commonly used as a perfume component and as a precursor in several chemical synthesis pathways. In addition to these more common applications, bisabolene has also been shown to be suitable for the synthesis of biofuels for both land and air transport [21].
Visualization Analysis of Descriptor Networks
Analysis of the correlation network obtained using some of the terms most relevant to the present work (Figure 1), such as the compounds identified in the essential oil of R. echinus, shows that few studies have been carried out with the essential oil of this species, since none of the compounds identified here has been cited as a keyword alongside the name of this plant in other research. In addition, we noticed that the compounds aromadendrene, caryophyllene oxide, cineole, copaene, germacrene D, spathulenol, α-humulene, and β-pinene seem to be closely correlated, since they have already been mentioned together (at least one pair at a time) in at least one work; this seems to be common [22,23].
Effects of the Essential Oil on DPPH Radicals
The results observed for antioxidant activity (Figure 2A) contrast with those observed for the aqueous and ethanolic extracts of R. echinus, whose IC50 values in the DPPH radical reduction assay were 227.9 and 112.9 µg/mL, respectively [13]. Similarly, species of the Hyptis genus, such as H. suaveolens, have been reported to produce essential oils with good antioxidant activity [24]. Regarding reducing power, the results (Figure 2B) are similar to those of de Oliveira et al. [25], where the reducing potential of the essential oil of Lantana montevidensis was quite high at the highest concentration tested (0.48 g/mL). In the work by Duarte et al. [13], the essential oil of R. echinus also showed reducing power supposedly inferior to the control (Fe3+) at lower concentrations, corroborating the findings of the present study.
Cytotoxic Activity of R. echinus against Mammalian Fibroblasts
In this study, the cytotoxicity of R. echinus was assessed using mammalian fibroblasts; the EC50 calculated for the cytotoxic activity was 140.2 µg/mL, a profile slightly different from that of nifurtimox (Figure 3), the standard drug, with an EC50 of 83.04 µg/mL, revealing that the essential oil tested was the less cytotoxic of the two.
In similar tests with the same cell line reported in the literature, concentrations below 100 µg/mL showed similar behavior, with generally null responses [26,27]; therefore, we used higher concentrations, at which we visualized the well-defined characteristics of a dose-dependent response [28].
In studies by Monzote et al. [29], with EC50 values between 12.8 and 63.3 µg/mL, the results were similar to those exhibited by the commonly used drugs. The main negative aspects of therapies for leishmaniasis and Chagas disease include high toxicity and the high rates at which the parasites develop resistance. Resistance has been observed in vitro [30,31] and may be associated with a decrease in mitochondrial membrane potential, with reduced drug accumulation in prolonged therapies [32].
In eukaryotic cells, essential oils can cause depolarization of mitochondrial membranes by decreasing the membrane potential, affecting the Ca2+ ionic cycle and other ion channels, reducing the pH gradient, and affecting the proton pump and the synthesis of ATP (adenosine triphosphate) [33]. These cytotoxic properties are of great importance in the application of essential oils, not only against certain human or animal pathogens, but also in the preservation of agricultural products, including the control of mites [34].
It is believed that the presence of secondary metabolites is related to plant defense [35]. For example, secondary products involved in plant defense through cytotoxicity to pathogens may be useful as antimicrobial drugs in humans, if they are not too toxic [36], and can provide valuable information for the screening of natural products [37]. The toxicity involves the formation of pores along the cell membranes of the parasite and the host, modifying the selective permeability to cations and leading to cell death [38]. Glinma et al. [39] reported the use of thiosemicarbazones, such as N(4)-phenyl and N-methyl-N(4)-phenyl derivatives, showing antitrypanosomal activity proportional to their lipophilicity.
Bioactivity of EORe against Promastigote Forms of Leishmania braziliensis and Epimastigote Forms of Trypanosoma cruzi
Comparing our results with those obtained by Barros et al. [40] using the essential oil of Lantana camara, the essential oil of R. echinus used in this study (Figure 4 and Table 2) was slightly more effective against L. braziliensis (LC50 = 56.45 µg/mL) than that of L. camara (LC50 = 72.31 µg/mL). Senthilkumar et al. [41] found that an essential oil containing γ-elemene as one of the major constituents showed significant toxicity, with LC50 = 71.71 ppm and LC90 = 143.41 ppm.
The levels of monoterpenes and sesquiterpenes with reported antiparasitic activity may have been responsible for the bioactivity of the oil against L. braziliensis, as the leishmanicidal activity of these terpenoids has already been demonstrated [42,43]. Another component of the essential oil of R. echinus is the sesquiterpene caryophyllene oxide, which the literature reports to have significant anti-trypomastigote activity [44]. For Rondon et al. [45], the more effective action of oils may be due to the synergistic effect of other compounds, such as caryophyllene and cymene, against Leishmania.
The probable cause of the change in the response to EORe exposure observed here at concentrations greater than 31.5 µg/mL for L. braziliensis was a change in the cellular behavior of the promastigote forms: they may have shifted their cellular physiology towards the amastigote condition, a form of resistance, probably due to an increase in the production of reactive oxygen species (ROS) in the intracellular environment [46].
Recent studies have demonstrated the activity of essential oil components containing caryophyllene [47] and trans-caryophyllene against T. cruzi. The effect observed in this study is likely related to a possible synergistic action of the constituents, since several of the compounds present in the essential oil of R. echinus show antiparasitic action. Essential oils of aromatic plants have shown activity against promastigote (MIC 0.0097-0.1565 µL/mL) and axenic amastigote forms (LC50 0.24-42.00 µL/mL) of both Leishmania species [48].
Morais-Braga et al. [37], evaluating the anti-kinetoplastid activity of Lygodium venustum, found that promastigotes were more susceptible to the tested products than epimastigotes, owing to the variability and specificity of the cellular targets of each organism. In this study, the promastigotes were likewise more susceptible to the tested product. For Nakamura et al. [49], an important criterion in the research of active compounds with therapeutic potential against L. amazonensis was to determine the absence of toxic effects on host cells. Comparing the concentration of greatest inhibition for R. echinus (62.5 µg/mL, with 70.81% activity) with the cytotoxicity at this same concentration, there was a total absence of toxic effects: the oil was not toxic to fibroblasts.
The results demonstrated that R. echinus showed no inhibition against T. cruzi (up to 125 µg/mL). The toxicity of the essential oil of R. echinus is possibly related to the presence of sesquiterpenes. In line with this, Martínez-Diaz et al. [50] demonstrated that (E)-caryophyllene exhibits antiparasitic effects against T. cruzi. Similarly, Cheikh-Ali et al. [51] observed activity of caryophyllene oxide against T. brucei.
Literature reports indicate that essential oils of various plants have shown promising antiparasitic activity against T. cruzi [37,52,53]. Saeidnia et al. [54] state that, among various compounds, sesquiterpene lactones showed potent antitrypanosomal effects with a selectivity index comparable to that of trypanocidal drugs. The authors also note that there is no report classifying the active sesquiterpenes according to their activity against the several intermediate forms of trypanosomes, such as epimastigotes, trypomastigotes, and amastigotes.
Plant Material
R. echinus was collected in Crato, Ceará, Brazil. The plant material was deposited in the Caririense Dárdano de Andrade-Lima Herbarium of the Regional University of Cariri (URCA) under the number 7348 HCDAL.
Reagents
The substance resazurin sodium was obtained from Sigma-Aldrich (St. Louis, MO, USA) and stored at 4 °C protected from light. A resazurin solution was prepared in 1% phosphate buffer, pH 7, and was sterilized in advance by filtration.
Preparation of the Essential Oil of R. echinus (EORe)
The essential oil of R. echinus was extracted from dried plant material by hydrodistillation in a Clevenger apparatus. After sampling, the leaves were sun-dried, crushed into small pieces, and then introduced into a 1 L volumetric flask, to which 300 mL of distilled water was added. The flask was attached to the Clevenger apparatus on a heating mantle, and the temperature was raised to the boiling point of water. After boiling, the count time (the 2 h extraction cycle) began. After each extraction cycle, the oil contained in the apparatus was collected with a pipette, stored in amber bottles, and refrigerated. After the extraction, sodium sulfate was used to remove the aqueous phase present in the essential oil.
Chemical Composition Analysis of the EORe
The chemical composition of the essential oil of R. echinus was determined by gas chromatography coupled to mass spectrometry (GC/MS) using a Shimadzu QP2010 Series instrument. The capillary column was an Rtx-5MS, 30 m long, with 0.25 mm internal diameter and 0.25 µm film thickness. Helium was used as the carrier gas at a flow rate of 1.5 mL/min. The injector temperature was 250 °C and the detector temperature was 290 °C. The column temperature was programmed from 60 to 180 °C at 5 °C/min and subsequently from 180 to 280 °C at 10 °C/min. The essential oil was diluted 1:200 in chloroform, and 1 µL was injected. The mass spectrometer was set to an ionization energy of 70 eV. The identification of individual components was based on their mass spectral fragmentation, by comparison with the NIST 08 spectral library, retention indices, and published data [55].
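Since peak identification relies on comparing measured retention indices with published values, a minimal sketch of the standard linear (van den Dool and Kratz) retention index calculation may be helpful; the retention times below are hypothetical, not taken from this study.

```python
# A sketch of the linear retention index used to match GC/MS peaks
# against literature values (e.g., Adams). All times are hypothetical.

def retention_index(t_x: float, t_n: float, t_n1: float, n: int) -> float:
    """Linear RI for a compound eluting between the C_n and C_(n+1) n-alkanes."""
    return 100 * (n + (t_x - t_n) / (t_n1 - t_n))

# Hypothetical retention times (min) for C14 and C15 n-alkane standards
t_c14, t_c15 = 18.20, 20.55
t_peak = 19.10  # unknown sesquiterpene peak

ri = retention_index(t_peak, t_c14, t_c15, 14)
print(f"RI = {ri:.0f}")  # ~1438; compare against published indices
```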
Visualization Analysis of Descriptor Networks
As a retrieval strategy, the descriptors "Rhaphiodon echinus" and "R. echinus" were searched in the search fields of the Scopus web platform (Elsevier). A universe of 32 documents published between 2009 and February 2021 was found. From these 32 documents, the keywords cited in the same works were extracted from Scopus and analyzed in VOSviewer, a useful software package for bibliometric and scientometric studies that allows the creation of visualization networks based on metadata [56]. In this case, the visualization-of-similarities (VOS) method was used to analyze the co-occurrence of descriptors [57].
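This is not the VOSviewer implementation, but the co-occurrence counting behind a network like Figure 1 can be sketched as follows; the keyword lists attached to each record are hypothetical.

```python
# Counting keyword co-occurrence edges (the basis of a VOS-style network)
# from the keyword lists of retrieved documents. Lists are hypothetical.

from collections import Counter
from itertools import combinations

documents = [
    ["essential oil", "germacrene D", "caryophyllene oxide"],
    ["Rhaphiodon", "essential oil", "spathulenol"],
    ["essential oil", "germacrene D", "α-humulene"],
]

edges = Counter()
for keywords in documents:
    for a, b in combinations(sorted(set(keywords)), 2):
        edges[(a, b)] += 1  # edge weight = number of records sharing the pair

for (a, b), w in edges.most_common(3):
    print(f"{a} -- {b}: {w}")
```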
Cell Lines Used
The T. cruzi clone CL-B5 was used for the in vitro evaluation of activity against T. cruzi. Parasites transfected with the β-galactosidase gene of Escherichia coli (lacZ) were provided by Dr. F. Buckner through the Gorgas Memorial Institute (Panama). Epimastigote forms, cultured in liver infusion tryptose (LIT) medium at 28 °C supplemented with 10% fetal bovine serum (FBS), 10 U/mL penicillin, and 10 µg/mL streptomycin at pH 7.2, were harvested during the exponential growth phase and incubated with different concentrations of the essential oil (125, 62.5, 31.25, and 15.62 µg/mL).
For the cytotoxic activity, the mammalian fibroblast line NCTC clone 929 was used. The test samples were dissolved in DMSO and diluted in RPMI 1640 medium (Sigma) supplemented with 10% fetal bovine serum (FBS) inactivated by heat (30 min at 56 °C), penicillin G (100 U/mL), and streptomycin (100 µg/mL). The cells in the pre-confluence phase were harvested with trypsin and maintained at 37 °C in a humidified incubator with 5% CO2. Essential oil was added at concentrations of 500, 250, 125, 62.5, 31.25, and 15.62 µg/mL. The antileishmanial activity was tested at concentrations that were not toxic to fibroblasts. Trypanocidal activity and cytotoxicity were tested concomitantly.

The radical scavenging ability of the essential oil of R. echinus was assessed using the stable free radical DPPH (1,1-diphenyl-2-picrylhydrazyl) as described by Kamdem et al. [58], with some modifications. Briefly, 50 µL of the essential oil of R. echinus at different concentrations (1-480 µg/mL) was mixed with 100 µL of freshly prepared DPPH solution (0.3 mM in ethanol). The plate was then kept in the dark at room temperature for 30 min. The reduction of the DPPH radical was measured by monitoring the decrease in absorbance at 517 nm using a microplate reader (SpectraMax, Sunnyvale, CA, USA). Ascorbic acid was used as the standard compound (i.e., positive control). The DPPH radical scavenging capacity was calculated as: scavenging (%) = [Acontrol − (Asample − Ablank)] / Acontrol × 100, where Asample is the absorbance of the tested sample with DPPH, Ablank is the absorbance of the test tube without DPPH, and Acontrol is the absorbance of the DPPH solution.
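A minimal sketch of this calculation follows, assuming the scavenging formula as reconstructed above from the stated variable definitions; the absorbance values are hypothetical.

```python
# DPPH scavenging percentage at a single concentration, following the
# formula reconstructed in the text. Absorbance values are hypothetical.

def dpph_scavenging(a_sample: float, a_blank: float, a_control: float) -> float:
    """Percent DPPH radical scavenging."""
    return (a_control - (a_sample - a_blank)) / a_control * 100.0

a_control = 0.820  # DPPH solution alone (517 nm)
a_blank = 0.045    # sample without DPPH
a_sample = 0.700   # sample + DPPH after 30 min in the dark

print(f"{dpph_scavenging(a_sample, a_blank, a_control):.1f}%")  # ~20.1%
```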
Fe3+ Reducing Power of R. echinus Essential Oil
The Fe3+-reducing property of the essential oil was determined using a modified method of Kamdem et al. [59]. A reaction mixture containing saline solution (58 µL, 0.9%, w/v), Tris-HCl (45 µL, 0.1 M, pH 7.5), the oil (27 µL, 1-480 µg/mL), and 110 µM FeCl3 (36 µL) was incubated for 10 min at 37 °C. Subsequently, 1,10-phenanthroline (34 µL, 0.25%, w/v) was added, and the absorbance of the orange complex formed was measured at 510 nm after 10, 20, 30, 50, 100, and 200 min (against blank solutions of the samples) using a SpectraMax microplate reader (Molecular Devices, Sunnyvale, CA, USA). The same procedure was performed for the control (i.e., Fe3+), but without the oil. The absorbance was also determined 20, 70, and 170 min after adding ascorbic acid. This was necessary because, after long periods, the components of the mixture may oxidize Fe2+ back to Fe3+, leading to a decrease in absorbance that is not related to the reduction of Fe3+ to Fe2+.
Anti-Epimastigote Assay of Trypanosoma cruzi
The assays were performed according to the procedures described by Vega et al. [60], with cultures that had not reached the stationary phase. Epimastigote forms were seeded at 1 × 10⁵ per mL in 200 µL in 96-well microdilution plates, which were incubated at 28 °C for 72 h. Then, 50 µL of CPRG solution was added to give a final concentration of 200 µM. The plates were incubated at 37 °C for an additional 6 h. The absorbance was read in a spectrophotometer at 595 nm. Nifurtimox was used as the reference drug. The concentrations were tested in triplicate, and each experiment was performed twice, separately. The inhibition percentage (%AE) was calculated as: %AE = [1 − (AE − AEB)/(AC − ACB)] × 100, where AE is the absorbance of the experimental group, AEB is the blank of the compounds, AC is the absorbance of the control group, and ACB is the blank of the culture medium. The essential oil was previously dissolved in DMSO; the concentration of dimethyl sulfoxide (DMSO) used to enable oil solubility was not greater than 0.01%.
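A short sketch of this inhibition calculation follows, assuming the %AE formula as reconstructed above; the absorbance readings are hypothetical.

```python
# Anti-epimastigote inhibition percentage (%AE), per the reconstructed
# formula in the text. Absorbance readings (595 nm) are hypothetical.

def percent_inhibition(ae: float, aeb: float, ac: float, acb: float) -> float:
    """%AE from experimental, compound-blank, control, and medium-blank wells."""
    return (1 - (ae - aeb) / (ac - acb)) * 100.0

ae, aeb = 0.40, 0.05   # treated wells and compound blank
ac, acb = 0.90, 0.10   # untreated control and culture-medium blank

print(f"%AE = {percent_inhibition(ae, aeb, ac, acb):.1f}")  # 56.3
```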
Anti-Promastigote Assay of Leishmania braziliensis
The assays were performed according to the procedures described by Mikus and Steverding [61], with some adjustments. The activity of the oil was assayed in triplicate. Promastigote forms (2.5 × 10⁵ parasites/well) were cultured in 96-well plastic plates. The samples were dissolved in dimethyl sulfoxide (DMSO), and different dilutions of the compounds were added up to a final volume of 200 µL. After 48 h at 26 °C, 20 µL of resazurin solution was added, and the oxidation-reduction was measured at 570 and 595 nm. In each assay, pentamidine was used as the reference drug. The anti-promastigote percentages (AP%) were calculated.
Cytotoxicity Assay
A colorimetric assay with resazurin was used to quantify cell viability, according to Rolón et al. [62]. NCTC 929 fibroblasts were seeded (5 × 10⁴ cells/well) in flat-bottom 96-well microdilution plates in 100 µL of RPMI 1640 medium for 24 h at 37 °C in 5% CO2 so that the cells could adhere to the plates. The medium was then replaced by different concentrations of the drugs in 200 µL of medium and incubated for another 24 h. Growth controls were included. Then, 20 µL of a 2 mM resazurin solution was added, and the plates were returned to the incubator for another 3 h to assess cell viability. The reduction of resazurin was determined by measuring absorbance at 490 and 595 nm. During the tests, controls with medium and drugs were used. Each concentration was tested three times. The cytotoxicity of each compound was estimated by calculating the percentage of cytotoxicity (%C).
Statistical Analysis
All assays were performed in triplicate. The results were expressed as the parasite growth inhibitory concentration (IC50) and as the mean ± standard deviation (SD). The analysis was performed using GraphPad Prism software (version 6.0). Values were expressed as mean ± standard error of the mean (SEM). Two-way ANOVA followed by Dunnett's multiple comparison test was used, when appropriate, to assess differences between groups and the control. The results were considered statistically significant at p < 0.05. The IC50 values were estimated through non-linear regression.
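For readers unfamiliar with how an IC50 is obtained by non-linear regression, the following sketch fits a four-parameter logistic curve, analogous to what GraphPad Prism does internally; the concentration-response data are hypothetical, not the study's measurements.

```python
# IC50 estimation by non-linear regression (four-parameter logistic).
# Data points are hypothetical placeholders.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Sigmoidal dose-response: top at low x, bottom at high x."""
    return bottom + (top - bottom) / (1 + (x / ic50) ** hill)

conc = np.array([1, 30, 60, 120, 240, 480], dtype=float)     # µg/mL
response = np.array([98, 90, 72, 45, 20, 8], dtype=float)    # % viability

params, _ = curve_fit(four_pl, conc, response,
                      p0=[5.0, 100.0, 100.0, 1.0],
                      bounds=(0, np.inf))
print(f"IC50 ≈ {params[2]:.1f} µg/mL")
```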
Conclusions
The most abundant compounds identified in the essential oil of R. echinus, via a correlation network analysis, appear to be closely correlated. The essential oil of R. echinus showed antiparasitic activity against L. braziliensis and no activity against T. cruzi. The concentration with the best effect had low cytotoxicity and a high antioxidant capacity, demonstrated by the Fe3+-reducing assay; complementary and additional research is required before clinical use can be considered. The species can be an important source in the search for new and selective agents for the treatment of tropical diseases caused by protozoa of the genus Leishmania. This study therefore demonstrates the potential use of the essential oil of R. echinus as a source of new agents for the treatment of leishmaniasis.
Power Reduction with Sleep/Wake on Redundant Data (SWORD) in a Wireless Sensor Network for Energy-Efficient Precision Agriculture
The use of wireless sensor networks (WSNs) in modern precision agriculture to monitor climate conditions and to provide agriculturalists with a considerable amount of useful information is currently being widely considered. However, WSNs exhibit several limitations when deployed in real-world applications. One of the challenges faced by WSNs is prolonging the life of sensor nodes. This challenge is the primary motivation for this work, in which we aim to further minimize the energy consumption of a wireless agriculture system (WAS) that monitors air temperature, air humidity, and soil moisture. Two power reduction schemes are proposed to decrease the power consumption of the sensor and router nodes. First, a sleep/wake scheme based on duty cycling is presented. Second, the sleep/wake scheme is merged with the suppression of redundant soil moisture data, resulting in a new algorithm called sleep/wake on redundant data (SWORD). SWORD can minimize the power consumption and data communication of the sensor node. A 12 V/5 W solar cell is embedded into the WAS to sustain its operation. Results show that the power consumption of the sensor and router nodes is minimized and power savings are improved by the sleep/wake scheme. The power consumption of the sensor and router nodes is improved by 99.48% relative to that in traditional operation when the SWORD algorithm is applied. In addition, data communication under the SWORD algorithm is reduced by 86.45% relative to that under the sleep/wake scheme. The comparison results indicate that the proposed algorithms outperform power reduction techniques proposed in other studies. The average current consumptions of the sensor nodes under the sleep/wake scheme and the SWORD algorithm are 0.731 mA and 0.1 mA, respectively.
Introduction
Precision agriculture (PA) is a supervision procedure that uses information technology to improve crop production and quality. The use of wireless sensor networks (WSNs) in agriculture to monitor climate conditions and to provide farmers with a considerable amount of information has been widely considered. The main contributions of this work are as follows (a brief illustrative sketch of the SWORD idea is given after this list):

(i) The power consumption of the sensor and router nodes of a wireless agriculture system (WAS) is modeled.

(ii) The power consumption and data communication of the adopted WAS are minimized, and battery life is prolonged, using two power reduction techniques, namely, the sleep/wake scheme and the SWORD algorithm. Considerable power saving is achieved by the WAS when the proposed SWORD algorithm is used.

(iii) Our results are compared with those of similar studies in terms of power consumption to verify the performance and efficiency of the proposed sleep/wake scheme and SWORD algorithm.
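The following sketch illustrates the core idea behind SWORD as described in the abstract, namely transmitting a soil-moisture reading only when it differs meaningfully from the last transmitted value; the threshold, the simulated readings, and the control flow are assumptions for illustration, not the authors' implementation.

```python
# Illustrative redundant-data suppression (the SWORD idea, sketched).
# THRESHOLD and the simulated readings are assumed values.

THRESHOLD = 2.0  # % soil moisture change treated as non-redundant (assumed)

readings = [35.0, 35.3, 35.1, 38.0, 38.2, 34.9]  # simulated soil moisture (%)

last_sent = None
sent = 0
for r in readings:
    if last_sent is None or abs(r - last_sent) >= THRESHOLD:
        print(f"transmit {r:.1f}%")  # stands in for the XBee send call
        last_sent = r
        sent += 1
    # otherwise the node goes straight back to sleep (redundant data)

print(f"{sent}/{len(readings)} packets sent")  # 3/6: half the traffic avoided
```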
Related Studies
Researchers have developed several power reduction techniques in their search for a lasting power source that would give sensor nodes a long life span. This section presents several WSN power reduction methods that can be used in PA. Early studies on energy-efficient PA can be traced back to Zhu et al. [6], who developed a WSN-based agricultural monitoring system. They found that the effective communication distance between nodes is more than 200 m in an open-field environment and that the average packet loss rate is 7.6%. These authors adopted a sleep/wake algorithm to reduce the power consumption of the WSN. The received power became attenuated and sinusoidal as the distance between transmitter and receiver increased. The sensor node woke up for 30 s every 4.5 min. The power consumption of the sensor node was 53 mA, compared with 80 mA in traditional operation, a power saving of 33.75%.

Zou et al. [25] proposed methods to optimize data transmission and extend network life by using an algorithm that predicts the energy harvested from a solar cell via environmental shadow detection. These mechanisms sustained network activities in an uninterrupted and efficient manner in the experimental study. However, a solar cell system is generally irregular and strongly influenced by weather changes. The power consumption of a sensor node in conventional operation is 80 mA, whereas consumption is minimized based on the duty cycle (DC), which is set to a fixed value. The peak current consumption of the sensor node is 4.5, 18, and 20 mA, corresponding to 10% DC in empty mode, 30% DC in lacking mode, and 100% DC in sufficient mode, respectively.

Srbinovska et al. [20] collected environmental parameter data from distributed WSNs. They adopted a sleep/wake strategy to reduce the power consumption of the sensor node in a WSN. The current consumption of the sensor node is minimized to 142 µA, compared with 24 mA in traditional operation, by switching between sleep and active modes at a low DC.
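The power-saving percentages quoted in this section all follow the same arithmetic, shown here as a small worked check that reproduces the 33.75% figure reported for Zhu et al.

```python
# Power saving relative to traditional (always-on) operation:
# saving (%) = (I_traditional - I_scheme) / I_traditional * 100

def power_saving(i_traditional_ma: float, i_scheme_ma: float) -> float:
    return (i_traditional_ma - i_scheme_ma) / i_traditional_ma * 100.0

print(power_saving(80.0, 53.0))  # 33.75 -> the 33.75% saving quoted above
```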
Nguyen et al. [26] observed the impacts of climate change on crop fields by using WSNs and adopted a sleep/wake algorithm to reduce the power consumption of the network. The data collection method improved the power consumption of the network; its advantages are low cost and ubiquitous monitoring, and the system can be widely applied to agriculture in developing countries. The Zigbee wireless protocol was configured to operate for 30 s every 15 min (i.e., a DC of 3.3%). Therefore, the 129 mA power consumption of the sensor node in traditional operation is reduced to 17.25 mA when the DC strategy is adopted.

Eto et al. [27] charged batteries via solar energy generation to solve the problem of covering an agricultural field with mobile sensor nodes. The results exhibited a 4% reduction in the number of nodes and a 10% extension of operation lifetime compared with the conventional method. The selection of the leader node was performed by calculating residual energy. In this case, the operation of the node can be extended to 90 working days (i.e., it consumes 11.11 mA from a battery capacity of 1000 mAh).

Fourie et al. [28] designed a fish pond management system for fish conservation using an autonomous solar-powered system in which Zigbee can enter sleep mode to improve the power consumption of the WSN. The results indicated that the system can successfully control the pond's temperature and dissolved oxygen level, and the use of a maximum power point tracking (MPPT) charging controller allows the platform to utilize more than 8 W of energy. However, some components of the system, including a solenoid valve, a stepper motor, and the sensor node hardware, consume a high amount of power, approximately 4 W, whereas the sensor node components (i.e., sensor, processor, and RF module) exhibit a low power consumption of approximately 207 mW relative to the total consumption. Therefore, the current consumption of the sensor node is 81.5 mA. When a solar cell with DC charging is adopted, the power efficiency is 31%; it increases to 98% when MPPT is used.
Bapat et al. [29] developed a WSN application for crop protection from animal intrusion in a farming field and adopted a sleep/wake scheme to conserve the power of the WSN. Successful results were obtained from laboratory-level trials, although the authors discovered a few technical faults in the circuit components. The power consumption of the system was 27 W/h, but calculations of the current consumption of the nodes in the WSN were not considered in their study.

Villarrubia et al. [30] constructed practical organizations of agents that can connect with one another while observing crop irrigation, with fuzzy logic adopted to accurately control watering quantities. The major finding of this research was that heterogeneous data from the surroundings can be merged via sensor node measurements. The proposed fuzzy-logic system consumes 4.5 L of water per day, compared with 7.3 L a day in traditional operation; thus, a 37% water saving is achieved over 30 days. The use of a solar cell also allows continuous charging of the sensor node battery.

Navarro-Hellín et al. [31] monitored soil water status and irrigation water by developing a practical application to optimize water resources in irrigated agriculture. They adopted a sleep/wake scheme to reduce the power consumption of the General Packet Radio Service (GPRS) node; the sending and sampling rates in the reported tests were set to 30 min and 15 min, respectively. The disadvantage of this system is its short life span of 13.35 days. Nevertheless, the average current consumption of the entire GPRS node is 5.93 mA, compared with traditional operation, in which the GPRS modem alone consumes 400 mA during data transmission.
De la Concepción et al. [24] presented an efficient WSN platform that is appropriate for agricultural applications and can be used in remote areas, with a sleep/wake algorithm proposed to limit the energy consumption of the platform. The results showed that the nodes are autonomous, scalable, and easy to locate and relocate, although the proposed system has a limited communication range because the transceiver uses an omnidirectional antenna. The sensor node captures and transmits an image for 6 min and then sleeps for 54 min with a very low sleep current of 0.8 µA, extending sensor node life to 8 days. Moreover, current consumption is improved to 4.427 mA, compared with conventional operation, in which the current consumption of the camera and the CC1110 is 270 mA and 31 mA, respectively, in transmission mode.

Cambra et al. [32] detected, examined, and auto-calibrated imbalances in the pH levels of the nutrient solution used in hydroponic agriculture. The results indicated that the design improved water quality in hydroponic agriculture by implementing and testing a smart system for bicarbonate control in irrigation. The advantages include low power consumption, low cost due to the use of a low-power wireless protocol (i.e., nRF24L01) that still ensures the required coverage area, and the provision of multimedia services through mobile devices or computers. However, the number of nodes in the network is limited to five. Each node wakes up for 15 s to transmit its data and then returns to sleep for 255 s (i.e., a DC of 15/270 = 5.555%). Accordingly, the power consumption of the sensor node is reduced from 49 mA (in traditional operation) to 2.807 mA (with the sleep/wake strategy).
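The duty-cycle figures above all rest on the same average-current model, I_avg = DC × I_active + (1 − DC) × I_sleep; a minimal sketch follows, with the sleep-mode current an assumed value, so the result only approximates the figure reported by Cambra et al.

```python
# Duty-cycle average current model. The sleep current (0.05 mA) is an
# assumption for illustration; real modules differ.

def avg_current(dc: float, i_active_ma: float, i_sleep_ma: float = 0.05) -> float:
    """Average node current (mA) for a duty cycle dc in [0, 1]."""
    return dc * i_active_ma + (1.0 - dc) * i_sleep_ma

dc = 15.0 / 270.0  # awake 15 s out of every 270 s, as in Cambra et al.
print(f"DC = {dc:.3%}, I_avg ≈ {avg_current(dc, 49.0):.2f} mA")
# ≈ 2.77 mA, in line with the ~2.8 mA reported with the sleep/wake strategy
```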
Ilie-Ablachim et al. [33] monitored environmental parameters for greenhouse applications. They used the MoniSen module to introduce a new granularity level for PA monitoring applications and adopted a distributed sensor architecture for large-scale applications by developing fully functional software and hardware. A sleep/wake algorithm was applied to reduce power consumption, and the sensor node in the system was configured to transmit data for 48 s every 24 min (i.e., a DC of 3.3%). Consequently, battery life is prolonged to 4408 h. The total current consumption of the system is 0.544 mA, compared with 46.145 mA for the sensor node in traditional operation, corresponding to a power efficiency of more than 90%.

Zaier et al. [34] developed and tested a smart irrigation system based on a WSN and solenoid electrovalves, implemented in 14 farms. The power consumption of the WSN was improved by the proposed sleep/wake strategy using periodic sleeping cycles, and is extremely low in sleep mode. The XBee Pro S2 module of the sensor node consumes 177 mA in active mode and 3.5 µA in sleep mode, and the soil moisture sensor consumes 7 mA in active mode. Power consumption is improved and battery life is extended through the power-down mode of the XBee.
All the aforementioned studies are summarized in Table 1, which highlights the improved power consumption and the related power reduction technique or scheme in each work. The table also lists the hardware adopted in each study, including the wireless protocol, microcontroller or processor, climate condition sensors, battery type and capacity, and the power and voltage of the solar panel. The future of the agriculture industry can be revolutionized by relying on computerized systems, advanced sensors, and energy-efficient wireless networks instead of the traditional agriculture system, which has proven to be inefficient, labor-intensive, and low in productivity.
WSN Topology
In this work, we aim to reduce unnecessary energy consumption and prolong the battery life of the network, along with the sensor and router nodes that are distributed across a farm field. Figure 1 shows the proposed topology of the wireless nodes in a farm field with a layout of 200 × 200 m², which represents the available test area in this study. This area can potentially be extended to actual medium-size commercial crop farm areas. The farm field consists of 16 sensor nodes, four router nodes, the main router node, and the coordinator node. Each sensor node is responsible for collecting climate conditions, such as air temperature, air humidity, and soil moisture, from a square area of 50 × 50 m², with the sensor node located at the center of the square. Four sensor nodes communicate wirelessly with one of the router nodes. The collected climate data from the sensor nodes are transmitted wirelessly to the main router via the router nodes. The main router node passes the climate conditions to the coordinator node, which is located at the base station (farmhouse). Our proposed topology considers only fixed router nodes for communication between sensor nodes and the coordinator node. In addition, changing roles for the router nodes are not considered, as shown in Figure 1, because the focus of this study is a physical-layer approach to energy efficiency, whereas changing roles involve higher layers, such as media access control (MAC) and network layers. Thus, changing roles provide a future opportunity to extend the current study toward understanding the benefits of energy savings across cross-layer dimensions.
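A minimal sketch of the grid placement just described follows: 16 sensor nodes at the centres of 50 m × 50 m cells inside the 200 m × 200 m field. It is purely illustrative of the geometry, not deployment code.

```python
# Sensor node placement: one node at the centre of each 50 m x 50 m cell
# inside a 200 m x 200 m field, yielding the 16 nodes described above.

FIELD = 200.0  # field side length (m)
CELL = 50.0    # cell side length (m)

centres = [(x * CELL + CELL / 2, y * CELL + CELL / 2)
           for x in range(int(FIELD // CELL))
           for y in range(int(FIELD // CELL))]

print(len(centres))   # 16 sensor nodes
print(centres[:4])    # (25.0, 25.0), (25.0, 75.0), (25.0, 125.0), (25.0, 175.0)
```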
The distance between the farmhouse and the crop field is approximately 200 m. Therefore, an appropriate wireless communication protocol, such as Zigbee (XBee S2C), must be used to ensure data delivery. This module theoretically offers a range of 1.2 km in outdoor environments [38] along with low power consumption, low cost, and battery operation in WSNs. In terms of power consumption, Zigbee (XBee S2C), which consumes 36.9 mW as reported in [13], is better than LoRa, SigFox, WiFi, and GPRS, which consume 100, 122, 835, and 560 mW, respectively. Nevertheless, the current work can be extended to cater to future wireless technologies, including LoRa.
Hardware Configuration of the Proposed WAS
The range covered by an individual node and the total area of the crop field can be set based on the number of nodes to be deployed in the field. Table 2 presents the hardware for the sensor node of the proposed WAS. Figure 2 shows the hardware of the sensor, router, and coordinator nodes. It presents a simple hardware implementation of the WAS, which involves a minimal interface and connection between the sensors, the microcontroller, and the wireless links. The digital humidity-temperature (DHT11) sensor and the soil moisture sensor require only three wires for a physical interface with the microcontroller: (i) supply voltage, (ii) ground, and (iii) output data. The first two wires power the sensors at +5 V, whereas the third wire carries the analog and digital serial output signals for the soil moisture sensor and DHT11, respectively. The microcontroller of the sensor and router nodes communicates with the Zigbee wireless module through a single-wire data bus.
The sensor and router nodes are fixed on a surface 1.5 m above the ground (Figure 3), which is the recommended height in [39], to avoid the effect of the Fresnel zone or signal reflection. The sensor and router nodes in the tested area are powered by a battery. The coordinator node is connected to a laptop/PC in the base station and therefore has no energy supply constraint. The PC is equipped with graphical user interface (GUI) software to monitor the climate conditions in the farm field. The sensor and router node structures are designed to work day and night. At daytime, the components, sensors, microcontroller, and XBee S2C are powered by solar energy, and a solar cell provides energy to charge the batteries of the sensor and router nodes. The solar panel is positioned in front of the sensor node box at a low inclination angle (20°–30°) to orient the solar cell panel relative to the sun. In addition, two power reduction algorithms (i.e., sleep/wake and SWORD) are adopted to further reduce the power consumption of the sensor and router nodes. At nighttime, the components use only battery energy, as supported by the two power reduction algorithms. For WSN simplicity and to reduce the complexity of the proposed WAS, one sensor node, one router node, the main router node, and the coordinator node are practically implemented to monitor the climate conditions in the farm field.
Zigbee Data Packet Length
The transmitted data packets for the sensor, router, and main router must be determined to identify the time consumption of the XBee S2C wireless module. The active transmission time (t_TX) of XBee S2C, which is based on the Zigbee wireless protocol, can be expressed as [42]:

t_TX = t_SA + L/D (1)

where t_SA is the transient time of XBee S2C from sleep mode to active mode; XBee S2C consumes 10.2 ms when pin sleep is used [43]. L denotes the data packet length of XBee S2C in bits, and D indicates the XBee S2C data speed of 250 kbps for the XBee S2C module using the 2.4 GHz industrial, scientific, and medical frequency band. The data packet length consists of 31 bytes of overhead plus the payload, giving total lengths of 35, 47, and 95 bytes for the sensor, router, and main router nodes (shown in Figure 4a-c, respectively). In the current work, the overhead bytes are constant, whereas the data bytes differ for the sensor, router, and main router nodes, depending on the parameters of the climate conditions. The data packet length and the active transmission time of the adopted nodes can be described as follows:
1. Sensor node: The data packet length of each sensor node consists of 35 bytes (i.e., 280 bits). The payload includes 4 bytes, namely, (i) identification (ID) of the sensor node, (ii) air temperature data, (iii) air humidity data, and (iv) soil moisture data (Figure 4a). Therefore, the active transmission time for each XBee S2C in the sensor node based on Equation (1) is 11.32 ms.
2. Router node: The data packet length of each router node consists of 47 bytes (i.e., 376 bits). The payload includes 16 bytes, i.e., 4 bytes for each sensor node, where each router node collects data from four sensor nodes (Figure 4b). Therefore, the active transmission time for each XBee S2C in the router node based on Equation (1) is 11.704 ms.
3. Main router node: The data packet length of the main router node consists of 95 bytes (i.e., 760 bits). The payload includes 64 bytes, i.e., 16 bytes for each router node, where the main router node collects data from four router nodes (Figure 4c). Therefore, the active transmission time for XBee S2C in the main router node based on Equation (1) is 13.24 ms.

The data packet length of XBee S2C includes a maximum of 127 bytes (31 bytes of overhead and 96 bytes of payload) [45]. Therefore, the adopted Zigbee wireless protocol using XBee S2C is adequate to achieve the proposed WSN topology in the current work, as presented in Figure 1. The maximum data packet length of 95 bytes is used to monitor the climate conditions of the agricultural field.
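As a quick sanity check, a sketch of Equation (1) under the stated assumptions (t_SA = 10.2 ms, D = 250 kbps) reproduces the three transmission times:

```python
# Numeric check of Equation (1): t_TX = t_SA + L / D.
# t_SA = 10.2 ms (sleep-to-active transient), D = 250 kbps (Zigbee, 2.4 GHz).

T_SA_MS = 10.2   # ms
D_KBPS = 250     # kbps, i.e., 250 bits per ms

PACKETS = {      # total packet length in bytes (31-byte overhead + payload)
    "sensor node": 35,
    "router node": 47,
    "main router node": 95,
}

for node, length_bytes in PACKETS.items():
    bits = length_bytes * 8
    t_tx = T_SA_MS + bits / D_KBPS   # 250 kbps == 250 bits per ms
    print(f"{node}: L = {bits} bits, t_TX = {t_tx:.3f} ms")

# Expected: 11.32 ms, 11.704 ms, and 13.24 ms, matching the values above.
```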
Delay in WSNs denotes the difference between the time when the information is produced in the sensor node and the time when the information arrives at the sink node. Delay sensitivity poses a challenge to WSNs, particularly with high multihop counts [46]. In the majority of agriculture applications, most information is delay-tolerant [47,48], and only a small proportion is delay-sensitive. Delay sensitivity is essential when multihops are used in WSNs and the data must be monitored in real time. In our application, however, climate conditions (i.e., temperature, humidity, and soil moisture) are delay-insensitive because the data are collected and transmitted from the sensor node to the coordinator node through a single router node. Evidently, delay is not a major concern in such a PA case. Therefore, delay is not considered in the current study.
SWORD Algorithm
The SWORD algorithm (Figure 5) is designed and implemented in the Atmega 328p microcontroller of the sensor node to reduce power consumption. The SWORD algorithm proceeds as follows:
1. The Atmega 328p microcontroller initially wakes up from sleep mode.
2. All the components of the sensor node (i.e., sensors, microcontroller, and XBee S2C) are supplied with energy from a solar cell (12 V/5 W) at daytime to charge their batteries. The batteries supply the sensor node with power at nighttime.
3. The microcontroller measures the climate conditions (i.e., air temperature, air humidity, and soil moisture).
4. The microcontroller computes the difference between the previous and subsequent soil moisture values (soil moisture difference = |previous value − subsequent value|) to check whether redundant data exist.
5. When the difference between the two values is zero or less than or equal to 5% (the threshold level), the sensors and the XBee S2C module enter sleep mode, and the microcontroller goes to power-saving mode to save the energy of the sensor node. In this case, the soil is wet and no data are transmitted from the sensor node to the router node. In addition, irrigating the soil is unnecessary, which leads to water saving. The components of the sensor node remain in sleep mode until the threshold level is exceeded. The small difference of 5% is selected to obtain a precise decision. In agriculture, irrigation systems depend on soil moisture measurements, and soil moisture is a crucial planning input for conducting irrigation [49,50]. When soil is dry, the irrigation system operates; otherwise (i.e., wet soil), the irrigation system is off. Therefore, the SWORD algorithm is driven by soil moisture measurement in the current study. By using soil moisture, irrigation is scheduled to sustain soil moisture conditions equivalent or close to the field capacity, thereby satisfying the crop water requirements. In addition, several sensors are being considered for future work to capture further agriculture-related parameters, such as soil temperature, soil conductivity, salinity, leaf wetness, and rainfall.
6. By contrast, all the components of the sensor node remain awake when the difference between the two values is greater than 5%. Therefore, the measured data in Step 3 are transmitted from the sensor node to the coordinator node via the router nodes. After the transmission process is completed, the sensors and XBee S2C module enter sleep mode, and the microcontroller goes to power-saving mode. The sensor node transmits the measured climate data every 15 min (900 s) for 2 s (i.e., an extremely low DC of 2/900 = 2.222 × 10⁻³). The sensors require 1 s to measure the climate data, and another second is required to transmit the data to the related router node, including the active transmission time of XBee S2C (i.e., 11.32 ms, as shown in the previous section) and of the microcontroller. Hence, 2 s are consumed by each sensor node to measure and transmit climate conditions to the related router node, as shown in the timing diagram in Figure 6. Consequently, each sensor node wakes up for 2 s and sleeps for 898 s.
7. The four sensor nodes communicate with one router node (Figure 1). Therefore, the allocated time for each router node is 16 s: 8 s for the four sensor nodes and 8 s of guard time (2 s after the data transmission of each sensor node to avoid data collision), as shown in Figure 6. The router nodes RN1, RN2, RN3, and RN4 collect and transmit the data of the four sensor nodes to the main router node within 16 s and then enter sleep mode. Consequently, each router node wakes up for 16 s and sleeps for 884 s (i.e., an extremely low DC of 1.778 × 10⁻²), as shown in Figure 6.
8. The main router node gathers the collected data of the four router nodes and transmits these data to the coordinator node within 64 s. The main router node then enters sleep mode for 836 s after data transmission (i.e., a DC of 7.111 × 10⁻²), as shown in Figure 6.

Figure 6. Timing diagram of the sleep/wake scheme of the sensor and router nodes (total time = 900 s).
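A minimal Python sketch of the redundancy check in Steps 4-6 follows; the 5% threshold comes from the text, while the function name, the sample readings, and the choice to compare against the previous measured value are illustrative assumptions.

```python
# Minimal sketch of the SWORD redundancy check (Steps 4-6). The 5% threshold
# is from the text; the sample readings and comparing against the previous
# *measured* value are illustrative assumptions.

THRESHOLD = 5.0  # soil moisture difference (%) at or below which data are redundant

def sword_step(previous_moisture, reading):
    """One 15-min cycle: return (transmitted?, value to compare against next)."""
    temperature, humidity, moisture = reading
    if abs(previous_moisture - moisture) <= THRESHOLD:
        # Redundant reading: sensors and XBee S2C stay asleep, nothing is sent.
        return False, moisture
    # Change exceeds the threshold: wake the radio and send all three values.
    return True, moisture

# Illustrative fragment of a day: moisture drifts slowly, then drops sharply.
prev = 60.0
for reading in [(34.0, 40.0, 60.0), (34.5, 39.0, 58.0), (36.0, 35.0, 49.0)]:
    sent, prev = sword_step(prev, reading)
    print("transmit" if sent else "sleep", reading)
```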
Sensor Node Power Consumption Model
The life span of sensor nodes is the time elapsed from the first transmission until the sensor nodes lose their sensing capability. The life span of a WSN relies on the current consumed by each node in the network. The power consumption of sensor nodes depends on the number of components. In this study, the sensor nodes include air temperature and humidity sensors embedded in the DHT11 sensor, a soil moisture sensor, the Atmega 328p microcontroller as a standalone system for reducing power consumption, and the XBee S2C wireless protocol. Among these components, XBee S2C is considered the main power consumer [51]. When the consumption values of these components are added, the total current consumption of the sensor node can be expressed as Equation (2):

I_SN = I_Soil + I_DHT + I_XBeeS2C + I_Atmega (2)

where I_Soil is the current consumed by the soil moisture sensor, I_DHT is the current consumed by the air temperature and humidity sensors, I_XBeeS2C is the current consumption of the XBee S2C wireless technology, and I_Atmega is the current consumed by the Atmega 328p microcontroller. The measurements of each sensor node component were performed using a storage oscilloscope (MCP Lab Electronics/DQ7042C) and a digital multimeter (MCP Lab Electronics/MT8045) to evaluate the current consumption of the sensor node. Therefore, the current drawn by the sensor node is presented for two cases. The first case, conventional operation (i.e., without a sleep/wake scheme or any power reduction technique), is formulated in Equation (2). The second case, which involves the sleep/wake scheme, can be expressed in terms of average current consumption, as presented in Equations (3)-(7): for each component,

I_avg = DC × I_active + (1 − DC) × I_sleep (3)-(6)

and the average current consumption of the whole sensor node is the sum over its four components:

I_SN_avg = I_Soil_avg + I_DHT_avg + I_XBeeS2C_avg + I_Atmega_avg (7)

where I_avg is the average current consumption of each sensor node component; DC is the proposed duty cycle of the sensor node, which provides an effective method for achieving energy efficiency and is computed as the ratio of the active time to the total time (t_active/T_total); and I_active and I_sleep are the active and sleep current consumption of each component of the sensor node. DC is configured to 2.222 × 10⁻³ to reduce the power consumption of the sensor node, where the active time is 2 s, the sleep time is 898 s, and the total time is 15 min (900 s). The overall current consumption of the sensor nodes (I_SN_total) without the sleep/wake scheme can be expressed as Equation (8):

I_SN_total = Σ I_SN_n (8)

where I_SN_n is the current consumption of each sensor node and n is the number of sensor nodes (n = 16 in the current work). Given that DC is equal in each sensor node, the total current consumption (I_SN_avg_total) of the sensor nodes can be calculated based on Equation (9):

I_SN_avg_total = Σ I_SN_n_avg (9)

where I_SN_n_avg is the average current consumption of each sensor node, as modeled in Equation (7). The average power consumption (P_avg_SN) of a sensor node can be calculated by multiplying the average current consumption by the supply voltage of the sensor node (i.e., 2 × 3.7 V = 7.4 V), as shown in Equation (10):

P_avg_SN = I_SN_avg × V_supply (10)

The sensor node life span (L_life) based on the sleep/wake scheme can be expressed as Equation (11):

L_life = C_Battery / I_SN_avg (11)

where C_Battery is the initial battery capacity of the sensor node in mAh. In the current work, two Li-ion rechargeable batteries (7.4 V/2600 mAh) are used to supply each sensor node with power.
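The following sketch evaluates this model numerically, assuming the per-component active/sleep currents reported later in the measurement section apply unchanged here:

```python
# Numeric sketch of Equations (2)-(7) and (11), assuming the per-component
# active/sleep currents (mA) reported in the measurement section.

COMPONENTS = {  # component: (I_active, I_sleep) in mA
    "soil moisture": (0.10, 0.01),
    "DHT11":         (1.85, 0.01),
    "Atmega 328p":   (6.00, 0.09),
    "XBee S2C":      (11.40, 0.58),
}

def sensor_node_avg(duty_cycle):
    """Equations (3)-(7): duty-cycled average per component, summed."""
    return sum(duty_cycle * act + (1 - duty_cycle) * slp
               for act, slp in COMPONENTS.values())

DC_SENSOR = 2 / 900                                         # 2 s active per 900 s
i_traditional = sum(act for act, _ in COMPONENTS.values())  # Equation (2)
i_avg = sensor_node_avg(DC_SENSOR)
life_h = 2600 / i_avg                                       # Equation (11), C = 2600 mAh

print(f"traditional: {i_traditional:.2f} mA")                     # ~19.35 mA
print(f"sleep/wake:  {i_avg:.3f} mA")                             # ~0.731 mA
print(f"battery life: {life_h:.0f} h (~{life_h / 24:.0f} days)")  # ~3554 h, ~148 days
```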
Router Node Power Consumption Model
In this work, each router node consists of an Atmega 328p microcontroller and the XBee S2C wireless protocol. Therefore, the power consumption of the router and main router nodes depends only on these two components. The current consumption of the router and main router nodes without and with the sleep/wake scheme can be expressed as Equations (12) and (13), respectively:

I_RN = I_Atmega + I_XBeeS2C (12)

I_RN_avg = DC × I_RN_active + (1 − DC) × I_RN_sleep (13)

The total current consumption of the router nodes without and with the sleep/wake scheme can be expressed as Equations (14) and (15), respectively:

I_RN_total = Σ I_RN_m (14)

I_RN_avg_total = Σ I_RN_m_avg (15)

where I_RN_m is the current consumption of each router node, as presented in Equation (12); I_RN_m_avg is the average current consumption of each router node, as modeled in Equation (13); and m is the number of router nodes (m = 4 in the current work). Equations (10) and (11) can be utilized to calculate the average power consumption and life span of the router and main router nodes based on the sleep/wake scheme, respectively.
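The same duty-cycle averaging can be checked numerically; the sketch below assumes the Atmega 328p and XBee S2C currents measured later in the text (6 mA/0.09 mA and 11.4 mA/0.58 mA) also apply to these nodes:

```python
# Numeric sketch of Equations (12)-(13) for the router and main router nodes,
# reusing the measured Atmega 328p and XBee S2C currents (an assumption).

I_ACTIVE = 6.00 + 11.40  # Equation (12): Atmega 328p + XBee S2C, active (mA)
I_SLEEP = 0.09 + 0.58    # both components asleep (mA)

for name, active_s in (("router node", 16), ("main router node", 64)):
    dc = active_s / 900                         # duty cycle from Figure 6
    i_avg = dc * I_ACTIVE + (1 - dc) * I_SLEEP  # Equation (13)
    print(f"{name}: DC = {dc:.4f}, I_avg = {i_avg:.3f} mA")

# Expected: ~0.967 mA (router) and ~1.860 mA (main router), as reported below.
```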
Energy Harvesting Techniques
Energy scavenging or harvesting draws energy from different environmental sources, such as thermal, solar, vibration, and wind. Energy harvesting approaches are effective in improving network life span [52]. Among the various forms of environmental energy, solar cell energy is selected in the current work to supply power to the sensor, router, and main router nodes. In energy harvesting, a battery is cyclically recharged to preserve the life span of nodes that continuously operate in the network, rather than focusing on reducing energy depletion. However, an energy-harvesting platform must be incorporated into the network to efficiently use the harvested energy.
The current consumption of the sensor nodes from the battery depends on the type of application [53]. In our work, the solar panel is positioned in front of the sensor node box. The solar cell angle is oriented toward the sun with an incident angle of 20°–30° relative to the ground [8]. The KINGRO-004V solar cell (KINGRO, Shaoxing, China), a first-generation polycrystalline solar cell (12 V/5 W) that delivers a maximum current of 416 mA, is selected (Figure 3). Table 3 provides the characteristics of the adopted solar cell.
Mathematical models should be created for the battery and solar cells of the sensor node to investigate the harvested energy from the solar cell and the battery consumption. The power consumption model of the sensor, router, and main router nodes based on the battery is presented in Equations (2)-(15), as shown in Sections 7.1 and 7.2. The solar cell is formulated in the current section. Solar cell efficiency (η) can be expressed as [25]:

η = P_max / (S × R) (16)

where P_max is the output power of the solar cell (measured in W), S is the solar cell surface area (measured in m²), and R is the radiation, defined as the intensity of the incident light power on the surface of a solar cell (measured in W/m²).

Figure 7 shows the connection of the soil moisture sensor to the microcontroller (connected to analog pin A0). The soil moisture sensor is connected to a 10 kΩ resistor as a voltage divider, as shown in Figure 7. The middle point (i.e., V_out) between the soil moisture sensor and the resistor is used for sensing variations in output voltage caused by changes in the moisture value. The output voltage (V_out) varies between 5 V and 0 V depending on the soil moisture and corresponds to the 0-1023 range of the Arduino microcontroller's analog-to-digital converter (10 bit resolution). The output voltage of the voltage divider is translated into a moisture percentage by the microcontroller algorithm. The resistance of the soil moisture sensor decreases with increasing soil moisture; therefore, the output voltage decreases. By contrast, the resistance increases when the soil is dry, thereby increasing the output voltage. The soil moisture sensor is calibrated in-field, as shown in Figure 8. The calibration of the soil moisture sensor (Figure 8) is achieved through an experimental test of three cases: (i) wet soil, (ii) field-capacity soil moisture, and (iii) dry soil, for a soil mixture (clay and sandy soil). This calibration can be used in all seasons. The figure plots the output voltage values of the soil moisture sensor on the left y-axis and the moisture values in percentage on the right y-axis with respect to the depth of the soil moisture sensor. The moisture percentages presented in Figure 8 are used in the microcontroller algorithm to determine the threshold level between dry soil and wet soil. From these values, the soil moisture sensor achieves self-calibration before each new measurement based on the data stored in the microcontroller algorithm to reduce measurement errors.
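The following sketch ties these two pieces together; the ADC conversion and Equation (16) follow the text, while the linear voltage-to-moisture map and the example numbers are illustrative assumptions (the paper instead derives its map from the in-field calibration of Figure 8).

```python
# Sketch of the soil-moisture reading path and Equation (16). The linear
# voltage-to-moisture map below is an illustrative assumption.

V_REF = 5.0     # ADC reference voltage (V)
ADC_MAX = 1023  # 10 bit resolution

def adc_to_voltage(adc_value):
    """Convert a raw ADC reading (0-1023) to the divider output V_out."""
    return adc_value * V_REF / ADC_MAX

def voltage_to_moisture(v_out):
    """Illustrative linear map: dry soil -> high V_out -> low moisture %."""
    return round((1 - v_out / V_REF) * 100, 1)

def solar_efficiency(p_max_w, area_m2, radiation_w_m2):
    """Equation (16): eta = P_max / (S * R)."""
    return p_max_w / (area_m2 * radiation_w_m2)

print(voltage_to_moisture(adc_to_voltage(512)))  # mid-scale reading -> ~50.0%
print(solar_efficiency(5.0, 0.1, 1000))          # hypothetical 0.1 m^2, 1 kW/m^2 -> 0.05
```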
Calibration of Sensors
The temperature and humidity sensor (i.e., DHT11) was calibrated to be extremely precise by the manufacturer in a laboratory [54]. The calibration coefficients of the sensor are stored as a program in the one-time programmable (OTP) memory. Once the OTP memory is programmed, its contents cannot be changed and are retained after power is turned off. This sensor contains a negative temperature coefficient component for temperature measurement and a resistive component for humidity measurement. DHT11 is a digital sensor that can be connected to an 8 bit microcontroller using a single-wire serial interface. It provides a fast response and exhibits high quality, cost-effectiveness, and anti-interference ability. Consequently, the DHT11 sensor does not require a calibration process and can work accurately to provide air temperature and humidity readings in any season.
Current Consumption Measurements
The power consumption of the WAS is the combined consumption of several hardware components, namely, the air temperature and humidity (DHT11) sensor, the soil moisture sensor, a standalone Atmega 328p microcontroller, and the XBee S2C module. The router and main router consist only of the standalone Atmega 328p microcontroller and the XBee S2C module. The current consumption of the sensor, router, and main router nodes is measured for three cases, namely, (i) traditional operation (without any power reduction technique), (ii) the sleep/wake scheme, and (iii) the SWORD algorithm, by using a digital multimeter (MCP Lab Electronics/MT8045) and a storage oscilloscope (MCP Lab Electronics/DQ7042C). The DHT11 and soil moisture sensors consume 1.85 mA and 0.1 mA in active mode, respectively, and 0.01 mA in idle mode (i.e., no measurement value) at a supply voltage of 3.3 V.
The Atmega 328p was practically implemented in a standalone configuration to reduce the power drawn from the battery of the network nodes. The selection of the oscillator frequency value is crucial to the power consumption of the microcontroller, as presented in [42], where the Atmega 328p microcontroller consumed 6.25 mA at 16 MHz. A trade-off between power consumption and processing speed is necessary. Therefore, an operating frequency of 16 MHz is selected in the current experiment to minimize measurement time. In this case, the Atmega 328p microcontroller consumes 6.25 mA in active mode and 0.09 mA in power-saving mode [55].
Our test bed measurements show that an active current consumption of 6 mA (Figure 9) is recorded for the microcontroller (Figure 10a). The active current drain of the Atmega 328p microcontroller is measured using an oscilloscope, whereas the power-saving mode is measured using a digital multimeter because a low current consumption value cannot be captured by an oscilloscope. In the test bed measurement, a 10 Ω shunt resistor is connected between the supply pin of the microcontroller and the voltage source (battery: 3.3 V). A small shunt resistor value is selected to reduce voltage loss in the supply line of the microcontroller. The drained current is obtained in milliamperes by dividing the measured voltage across the shunt resistor by the shunt resistor value of 10 Ω (I = V/R). Therefore, the active current consumption of Atmega 328p during the transmission process is 60 mV/10 Ω = 6 mA in active mode, as shown in Figure 10a.
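A small helper captures this shunt-resistor conversion; the 114 mV value for XBee S2C is an inferred example (back-calculated from the 11.4 mA reported below), not a quoted measurement:

```python
# Shunt-resistor current measurement: I = V / R, with the oscilloscope-measured
# voltage across the 10-ohm shunt. With V in mV and R in ohms, I is in mA.

R_SHUNT_OHM = 10.0

def shunt_current_ma(v_shunt_mv):
    """Convert the measured shunt voltage (mV) to current (mA)."""
    return v_shunt_mv / R_SHUNT_OHM

print(shunt_current_ma(60))   # Atmega 328p active: 6.0 mA
print(shunt_current_ma(114))  # XBee S2C transmitting: 11.4 mA (inferred example)
```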
In the WSN hardware, the main power is consumed by the RF module during data transmission and reception [56]. Therefore, the same procedure as in the microcontroller measurement of active current consumption can be applied to XBee S2C. An active current consumption of 11.4 mA is measured during the transmission process, as shown in Figure 10b, whereas the sleep current consumption of XBee S2C is measured to be 0.58 mA using the digital multimeter. The parameters in Equations (2)-(11) with and without the sleep/wake scheme for the sensor node, together with the parameters in Equations (12)-(15) for the router and main router nodes, are provided in Table 4. The values in Table 4 are measured for active time, sleep time, DC, and active and sleep current consumption with and without the sleep/wake scheme.
Power Consumption Based on the Sleep/Wake Scheme
With the sleep/wake scheme, the battery life (L_life) computed from Equation (11) is 148 days (3554 h) for the sensor node, 112 days (2687 h) for the router node, and 58 days (1398 h) for the main router node, compared with 5.6 days (134 h), 6 days (149 h), and 6 days (149 h), respectively, without the scheme. Figure 11 shows the current consumption of each component of the sensor node with and without the sleep/wake scheme. The current consumption of XBee S2C is the highest among all the components of the sensor node because RF components frequently transmit data with maximum output power to ensure data delivery. In the current work, a transmit power of +5 dBm (3.1 mW) is adopted to ensure communication between WSN nodes in the farm field. Figure 12 shows the power consumption of the router and main router nodes. The current consumption of the router nodes is considerably improved by the sleep/wake scheme. The power consumed by the router nodes is higher than that consumed by the sensor node (Table 4). This result is attributed to the following conditions:
(i) The router node collects climate condition data from four sensor nodes within 16 s, whereas the sensor node measures data using two sensors within 2 s. (ii) The payload of the router node is higher than that of the sensor nodes, as indicated in Figure 4b,c. (iii) The DC of the router node is larger than that of the sensor node, as shown in Figure 6.

Figure 13 illustrates that when the sleep/wake scheme is applied, the power savings of the sensor node increase considerably to 96% relative to traditional operation, which consumes 19.35 mA. Power savings are computed using Equation (17) [57]:

Power savings = (1 − I_sleep/wake scheme / I_traditional operation) × 100% (17)

Figure 13. Developed power savings of the WSN nodes based on the sleep/wake scheme.
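Plugging the measured averages into Equation (17) reproduces the reported savings (a sketch; the currents are the values quoted in the text):

```python
# Numeric check of Equation (17): savings relative to traditional operation.

def power_savings(i_scheme_ma, i_traditional_ma):
    """Equation (17), with both currents in mA; returns percent saved."""
    return (1 - i_scheme_ma / i_traditional_ma) * 100

CASES = {  # node: (sleep/wake average, traditional) in mA
    "sensor node":      (0.731, 19.35),
    "router node":      (0.967, 17.40),
    "main router node": (1.860, 17.40),
}

for node, (i_scheme, i_trad) in CASES.items():
    print(f"{node}: {power_savings(i_scheme, i_trad):.1f}% saved")

# Expected: ~96%, ~94%, and ~89%, matching Figure 13.
```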
In addition, power savings are improved to 90%, 99%, 98%, and 95% for soil moisture, DHT11, Atmega 328p, and XBee S2C relative to their values in traditional operation, respectively. Power savings are increased to 94% and 89% for the router and main router nodes, respectively (Figure 13), relative to traditional operation (i.e., without the sleep/wake scheme), which consumes 17.4 mA for the router and main router nodes. Power savings are computed using Equation (17). Figure 13 shows that the power savings of the sensor node are better than those of the router and main router nodes for the same reasons mentioned previously. The power saving of the router node is 96.7% and 93% for Atmega 328p and XBee S2C relative to traditional operation, respectively. The power saving of the main router is 91.5% and 88% for Atmega 328p and XBee S2C relative to traditional operation, respectively.
Battery Life Estimation Based on the Sleep/Wake Scheme
The application of the sleep/wake scheme verifies that this approach can improve the current consumption of the proposed WAS to 0.731 mA (sensor node), 0.967 mA (router node), and 1.86 mA (main router node). Consequently, battery life can be extended to 148 days (sensor node), 112 days (router node), and 58 days (main router node) by using two rechargeable Li-ion batteries with 7.4 V/2600 mAh. The battery life of the sensor node and the router/main router nodes is 5.6 days and 6 days in traditional operation, respectively. The battery life for a given battery capacity, which relies on Equation (11) at the current usage of the sensor, router, and main router nodes, is presented in Figure 14. The figure also shows the improvement in the current drained by the nodes of the WAS when the sleep/wake scheme is considered.
Results of the SWORD Algorithm
The application of the SWORD algorithm depends on soil moisture measurements. Air temperature, air humidity, and soil moisture are measured by the sensor node of the WAS. Experiments were conducted several times on different days between June 2018 and July 2018 to measure the climate conditions in the farm. However, the measurements (Figure 15) performed on 8 July 2018 are considered in this study to verify the performance of the proposed SWORD algorithm because the weather conditions on that day were harsh (e.g., high air temperature and low soil humidity) in the considered area. The measurements are configured to capture data every 15 min, as seen in the timing diagram of the sleep/wake scheme in Figure 6. The sensor node transmits the climate conditions to the coordinator node every 15 min via the router and main router nodes. In this case, 96 samples (4 samples every hour) are collected in a day (Figure 15). The SWORD algorithm checks whether the measured data regarding soil moisture are redundant. When the difference between the preceding and subsequent soil moisture measurements is zero or less than or equal to 5%, the data are not transmitted to the router node; otherwise, the data are transmitted to the router node. The transmitted data are minimized through this strategy. Thus, the power consumption of the sensor node is considerably improved relative to the sleep/wake scheme and traditional operation. In the SWORD algorithm, the transmitted data from the sensor node to the router node are minimized to 13 samples, as shown in Figure 16. Soil moisture measurements are weighted within the range of 45-75%: the reading range of 45-49% denotes dry soil, 50-55% represents moderate moisture or field capacity, and 55-75% indicates wet or saturated soil. The SWORD algorithm minimizes data transmission (data_tx) by 86.45% relative to the sleep/wake scheme, as obtained using Equation (18):

data_tx reduction = (1 − samples based on the SWORD algorithm / samples based on the sleep/wake scheme) × 100% (18)

Therefore, the average current consumption of the sensor node is remarkably improved to 0.1 mA by applying Equation (19):

I_SN_SWORD = I_SN_avg × (samples based on the SWORD algorithm / samples based on the sleep/wake scheme) (19)

where I_SN_avg is the average current consumption of 0.731 mA of the sensor node, previously computed using Equation (7) based on the application of the sleep/wake scheme. In this case, a power consumption of 0.74 mW is dissipated during the transmission of climate conditions in the sensor node based on the SWORD algorithm. Consequently, battery life can be prolonged to 1093 days (3 years) by applying Equation (11) and using the same battery capacity of the sensor node (i.e., 7.4 V/2600 mAh). The battery life for a given battery capacity at the current usage of the sensor node is estimated in Figure 17.
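The reported figures can be reproduced from Equations (18), (19), and (11); the sketch assumes Equation (19) scales the sleep/wake average by the fraction of transmitted samples, which matches the quoted 0.1 mA:

```python
# Numeric check of Equations (18)-(19) and the resulting battery life,
# using the sample counts and currents reported in the text.

SAMPLES_SLEEP_WAKE = 96  # one sample every 15 min over a day
SAMPLES_SWORD = 13       # non-redundant samples actually transmitted

reduction = (1 - SAMPLES_SWORD / SAMPLES_SLEEP_WAKE) * 100      # Equation (18)

I_SLEEP_WAKE_MA = 0.731
i_sword = I_SLEEP_WAKE_MA * SAMPLES_SWORD / SAMPLES_SLEEP_WAKE  # Equation (19)

life_days = 2600 / i_sword / 24                                 # Equation (11)

print(f"data_tx reduced by {reduction:.2f}%")  # ~86.46%
print(f"I_avg = {i_sword:.3f} mA")             # ~0.1 mA
print(f"battery life ~ {life_days:.0f} days")  # ~1094 days (~3 years)
```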
Figure 17 indicates that the SWORD algorithm improves the power savings of the sensor node to 86.45% and 99.48% relative to the sleep/wake scheme and traditional operation, respectively. However, these percentages may increase or decrease depending on the weather, soil conditions, and farm location. As the weather temperature increases, the soil condition changes rapidly from moist to dehydrated, thereby remarkably changing the soil moisture measurements; data transmission can then occur, and power consumption increases. By contrast, data transmission and power consumption are considerably minimized when soil moisture is constant under good weather or rainfall conditions. During rainfall, the microcontroller measures soil moisture based on the soil humidity sensor to control the operation of the SWORD algorithm. In case of rainfall, the soil is wet, irrigation is unnecessary, and no data transmission occurs from the sensor node to the router node. Consequently, the sensors and XBee module of the sensor node enter sleep mode to conserve energy. In addition, power consumption and data transmission vary from summer to winter because soil moisture changes according to temperature and humidity. In our work, we set a constant soil moisture value (i.e., the threshold) to trigger the sensor to alert users if intervention is necessary, such as switching on the water irrigation system or acting through an automated irrigation system. Values of 40-50% are considered acceptable soil moisture levels in plant-growing environments, as proven in [58].

Figure 17. Estimated battery life versus battery capacity in the sensor node based on the SWORD algorithm, sleep/wake scheme, and traditional operation.
The output power of the solar panel is adequate to supply the sensor node because the power consumption of the sensor node varies from 0.75 mW (SWORD algorithm) and 5.409 mW (sleep/wake scheme) to 143.19 mW (traditional operation). This result is based on the recommendation in [26], which indicates that the capacity of a solar cell should be at least six times the average power consumption of the load. The power consumption of the router and main router nodes is 7.155 mW and 13.764 mW with the sleep/wake scheme, whereas the value is 128.76 mW in traditional operation. The energy of the adopted solar cell can thus sufficiently supply the sensor, router, and main router nodes. Consequently, when the solar cell (12 V/5 W) is used with the WAS hardware to supply power to the nodes alongside the SWORD algorithm, an effectively unlimited energy supply for the WAS nodes can be expected.
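The sizing rule can be checked directly (a sketch; the six-times margin is the recommendation from [26] quoted above, and the loads are the values in the text):

```python
# Check of the solar sizing rule from [26]: the solar cell capacity should be
# at least six times the average power consumption of the load.

SOLAR_MW = 5.0 * 1000  # 12 V / 5 W panel, expressed in mW

LOADS_MW = {  # average power consumption per node and scheme (mW)
    "sensor (SWORD)":            0.75,
    "sensor (sleep/wake)":       5.409,
    "sensor (traditional)":      143.19,
    "router (sleep/wake)":       7.155,
    "main router (sleep/wake)":  13.764,
}

for node, p_mw in LOADS_MW.items():
    ok = SOLAR_MW >= 6 * p_mw
    print(f"{node}: 6x load = {6 * p_mw:.1f} mW -> {'OK' if ok else 'undersized'}")
```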
Power Consumption Comparison
The power consumption of the WAS using the sleep/wake scheme and the SWORD algorithm is compared with the power consumption of other schemes presented in existing studies on agriculture monitoring systems (Figure 18) to verify the proposed system. The performance of this work in terms of current consumption is achieved based on the methodology designed in Sections 6-8 and is validated by the actual hardware (prototype) implementation (Section 4) and on-site measurements (Section 9). The proposed algorithms are compared with those presented in related studies based on the values recorded during measurements in the previous literature. The performance in terms of average current consumption of previous methods is presented in detail in Section 2.
Although the previous works are based on energy-efficient approaches, it is important to acknowledge that the measurements taken in this study might differ from those in the previous works, taking into account several factors: (i) the sensors' power requirements, (ii) the techniques/algorithms introduced in the previous works, and (iii) variation in the agricultural conditions (such as humidity, temperature and other environmental conditions) where the previous works were deployed.
The sensor node current consumption of the proposed WAS is reduced using two approaches: the first is the sleep/wake scheme adopting a low duty cycle (DC), and the second is the SWORD algorithm. The current consumption of the sensor node (0.731 mA), router node (0.967 mA), and main router node (1.86 mA) is achieved with the sleep/wake scheme using a low DC. We observe that the current consumption of the proposed WAS using the SWORD algorithm is approximately similar to the performance of the approaches in [35][36][37], which achieved current consumptions of 0.1 (based on sleep/wake), 0.118 (based on sleep/wake), and 0.227 mA (based on DC), respectively. The average current consumption of our proposed WAS is 0.1 mA and 0.731 mA for the SWORD algorithm and the sleep/wake scheme, respectively, as shown in Figure 18.
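The averages above follow from simple duty-cycle weighting. The sketch below is a minimal illustration rather than the authors' code; the active and sleep currents are placeholders chosen so the result lands near the reported 0.731 mA, not measured values.

```python
# Hypothetical duty-cycle power model for a sensor node: the average
# current is the time-weighted mix of active and sleep currents.
def average_current_ma(duty_cycle: float, active_ma: float, sleep_ma: float) -> float:
    """Weighted average of active and sleep current for a given duty cycle."""
    return duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma

# Sensor node with the 0.222% duty cycle used in the WAS; 45 mA active
# (MCU + XBee awake) and 0.632 mA sleep are illustrative assumptions.
print(f"{average_current_ma(0.00222, 45.0, 0.632):.3f} mA")  # ~0.731 mA
```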
Conclusions
We propose an energy-efficient scheme known as the WAS for agricultural applications. The system is designed and practically implemented to monitor the climate conditions of crops, including air temperature, air humidity, and soil moisture. The system hardware is carefully selected based on energy-efficiency criteria to minimize the power consumption of the different nodes in the WSN, such as the low-power Zigbee wireless protocol (i.e., XBee S2C), the standalone Atmega 328p microcontroller, and low-power sensors. In addition, two power reduction techniques are proposed to further improve the power consumption of the sensor, router, and main router nodes: the sleep/wake scheme and the SWORD algorithm. The sleep/wake scheme is achieved based on different DCs of the nodes in the WAS (i.e., 0.222% for the sensor node, 1.778% for the router node, and 7.111% for the main router node). The SWORD algorithm combines the sleep/wake scheme and the minimization of redundant data from sensor node packets. The power consumption of the sensor, router, and main router nodes is considerably improved when the two proposed methods are used.
The power saving and data communication achieved by applying the SWORD algorithm may be increased or decreased depending on the redundant data from soil moisture measurements, which depend on the soil condition (wet or dry). The power savings in the current work are improved by 86.45% and 99.48% relative to the sleep/wake scheme and traditional operation, respectively. In addition, data communication is minimized by 86.45%. The results of current consumption are compared with those in previous studies to validate the performance of the proposed system. The proposed WAS allows data collection for decision support in farming fields and can assist users in automating irrigation systems in agricultural fields in the future. The use of an energy-efficient and advanced WSN technology can achieve highly productive and sustainable precision farming. | 17,178 | 2018-10-01T00:00:00.000 | [
"Agricultural and Food Sciences",
"Engineering",
"Environmental Science",
"Computer Science"
] |
Selected ‘Starter Kit’ energy system modelling data for Malawi (#CCG)
Energy system modelling can be used to assess the implications of different scenarios and support improved policymaking. However, access to data is often a barrier to starting energy system modelling in developing countries, thereby causing delays. Therefore, this article provides data that can be used to create a simple zero order energy system model for Malawi, which can act as a starting point for further model development and scenario analysis. The data are collected entirely from publicly available and accessible sources, including the websites and databases of international organizations, journal articles, and existing modelling studies. This means that the dataset can be easily updated based on the latest available information or more detailed and accurate local data. These data were also used to calibrate a simple energy system model using the Open Source Energy Modelling System (OSeMOSYS) and three stylized scenarios (Fossil Future, Least Cost and Net Zero by 2050) for 2020–2050. The assumptions used and results of these scenarios are presented in the appendix as an illustrative example of what can be done with these data. This simple model can be adapted and further developed by in-country analysts and academics, providing a platform for future work.
Subject: Energy
Value Of The Data
These data can be used to develop national energy system models to inform national energy investment outlooks and policy plans, as well as provide insights on the evolution of the electricity supply system under different trajectories.
The data are useful for country analysts, policy makers and the broader scientific community, as a zero-order starting point for model development.
These data could be used to examine a range of possible energy system pathways, in addition to the examples given in this study, to provide further insights on the evolution of the country's power system.
The data can be used both for conducting an analysis of the power system and for capacity-building activities. In addition, the methodology for translating the input data into modelling assumptions for a cost-optimization tool is presented.

Data Description

The data provided in this paper can be used as input data to develop an energy system model for Malawi. As an illustration, these data were used to develop an energy system model using the cost-optimization tool OSeMOSYS for the period 2015-2050. For reference, that model is described in Appendix A and its data files are available as Supplementary Materials. Appendix figure A3 for Malawi is repeated below. This is purely illustrative. It shows a zero-order model of the production of electricity by technology over the period 2020 to 2050 for a least-cost energy future. Using the data described in this article, the analyst can reproduce this, as well as many other scenarios, such as net-zero by 2050, in a variety of energy planning toolkits.
The data provided were collected from publicly available sources, including the reports of international organizations, journal articles and existing model databases. The dataset includes the techno-economic parameters of supply-side technologies, installed capacities, emissions factors and final electricity demands. The different items and their descriptions, in order of appearance, are presented below.
Existing Electricity Supply System
The total power generation capacity in Malawi is estimated at 393.6 MW in 2018 [3,4,5,6]. The estimated existing power generation capacity is detailed in Table 1 below [3,4,5,6]. The methods used to calculate these estimates are described in more detail in Section 2.1. Data on the installation year of each power plant can be found in the country dataset published on Zenodo, which is useful for developing a zero-order Tier 2 national energy model [1]. This is consistent with U4RIA energy planning goals [2].
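A minimal sketch of the retirement logic implied by combining installation years with operational lives follows; the plant entries are placeholders, not the actual PLEXOS World records.

```python
# Each plant contributes capacity until commissioning year + operational life.
plants = [  # (name, capacity_MW, commissioned, operational_life_years) - placeholders
    ("hydro_plant_A", 64.8, 1966, 60),
    ("hydro_plant_B", 52.7, 1995, 60),
    ("diesel_plant_C", 20.0, 2010, 25),
]

def installed_capacity_mw(year: int) -> float:
    """Sum the capacity of all plants still within their operational life."""
    return sum(cap for _, cap, built, life in plants
               if built <= year < built + life)

for y in (2020, 2030, 2040):
    print(y, installed_capacity_mw(y), "MW")
```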
Techno-economic Data for Electricity Generation Technologies
The techno-economic parameters of electricity generation technologies are presented in Table 2, including costs, operational lives, efficiencies and average capacity factors. Cost (capital and fixed), operational life and efficiency data were collected from reports by the International Renewable Energy Agency [7,8,9] and are applicable to all of Africa. These cost data include projected cost reductions for renewable energy technologies, which are presented in Table 3 [3,7,8,9,10,11,12].

Table 3. Projected costs of renewable energy technologies for selected years to 2050 [7,9].
Techno-economic Data for Power Transmission and Distribution
The techno-economic parameters of transmission and distribution technologies were taken from the Reference Case scenario of The Electricity Model Base for Africa (TEMBA) [13]. According to these data, the efficiencies of power transmission and distribution in Malawi are assumed to reach 95.0% and 86.0%, respectively, in 2030. In the following table, the techno-economic parameters associated with the transmission and distribution network are presented.
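As a one-line illustration of what these assumptions imply, the overall delivery efficiency of the grid in 2030 is the product of the two stage efficiencies:

```python
# Overall grid delivery efficiency implied by the 2030 assumptions above:
# a unit of generation passes through transmission, then distribution.
transmission_eff = 0.95
distribution_eff = 0.86
print(f"Delivered share of generation: {transmission_eff * distribution_eff:.1%}")  # ~81.7%
```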
Techno-economic Data for Refineries

Malawi has no reported domestic refinery capacity [14]. In the OSeMOSYS model, two oil refinery technologies were made available for investment in the future, each with different output activity ratios for Heavy Fuel Oil (HFO) and Light Fuel Oil (LFO). The techno-economic data for these technologies are shown in Table 5.
Emission Factors
Fossil fuel technologies emit several greenhouse gases, including carbon dioxide, methane and nitrous oxide, throughout their operational lifetimes. In this analysis, only carbon dioxide emissions are considered. These are accounted for using carbon dioxide emission factors assigned to each fuel, rather than to each power generation technology. The assumed emission factors are presented in Table 7. Tables 8 and 9 show estimated domestic renewable energy potentials and fossil fuel reserves, respectively, in Malawi.
Electricity Supply System Data
Data on Malawi's existing on-grid power generation capacity, presented in Table 1, were extracted from the PLEXOS World dataset [3,4,5] using scripts from the OSeMOSYS global model generator [24]. PLEXOS World provides estimated capacities and commissioning dates by power plant, based on the World Resources Institute Global Power Plant database [5]. These data were used to estimate installed capacity in future years based on the operational life data in Table 2. Data on Malawi's off-grid renewable energy capacity were sourced from yearly capacity statistics produced by IRENA [6]. Cost, efficiency and operational life data in Table 2 were collected from reports by IRENA [7,8,9], which provide generic estimates for these parameters by technology. These reports also provide projections of future costs for renewable energy technologies. These data are presented in Table 3 and Fig. 1, where it was assumed that costs fall linearly between the data points provided by IRENA and that costs remain constant beyond 2040, when the IRENA forecasts end.
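A minimal sketch of that interpolation assumption follows; the year/cost pairs are placeholders, not the actual IRENA figures.

```python
import numpy as np

irena_years = [2020, 2030, 2040]      # years with published cost estimates
irena_capex = [1100.0, 800.0, 650.0]  # assumed capital costs ($/kW), illustrative

model_years = np.arange(2020, 2051)
# np.interp interpolates linearly between points and holds the last value
# constant beyond 2040, matching the assumption described above.
capex = np.interp(model_years, irena_years, irena_capex)
print(capex[model_years == 2025][0], capex[model_years == 2050][0])  # 950.0, 650.0
```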
Country-specific capacity factors for solar PV, onshore wind and hydropower were sourced from Renewables Ninja and the PLEXOS-World 2015 Model Dataset [3,10,11]. These sources provide hourly capacity factors for 2015 for solar PV and wind, and 15-year average monthly capacity factors for hydropower; the average values are presented in Table 2. These data were also used to estimate capacity factors for the 8 time slices used in the OSeMOSYS model (see detail in Annex 1).
Capacity factors for other technologies were sourced from reports by IRENA [8,12], which provide generic estimates for each technology. The costs and efficiencies of power transmission and distribution were sourced from the TEMBA reference case [23], which provides generic cost estimates and country-specific efficiencies that consider expected efficiency improvements in the future. Techno-economic data for refineries were sourced from the IEA Energy Technology Systems Analysis Programme (ETSAP) [15], which provides generic estimates of costs and performance parameters, while the refinery options modelled are based on the methods used in TEMBA [13].
Fuel Data
The crude oil price is based on an international price forecast produced by the US Energy Information Administration (EIA), which runs to 2050 [16]. The price was increased by 10% for imported oil to reflect the cost of importation. The prices of imported HFO and LFO were calculated by multiplying the oil price by 0.8 and 1.33, respectively, based on the methods used in TEMBA [13]. The prices of coal, natural gas and biomass were sourced from an IRENA report [8], which provides generic estimates for costs to 2030. Again, a linear rate of change was assumed between data points from IRENA, and the forecast was extended to 2040 using the rate of change between 2020 and 2030. Prices were then assumed constant after 2040. The cost of domestically-produced biomass was increased by 10% to estimate a cost of imported biomass.
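The pricing rules above translate directly into a few lines of arithmetic. In the sketch below, the multipliers (10% import mark-up, 0.8 for HFO, 1.33 for LFO) and the extension rule are the ones stated in the text, while the price inputs are illustrative placeholders.

```python
def extend_price(p_2020: float, p_2030: float) -> dict:
    """Linear trend 2020-2030, extended to 2040, constant thereafter."""
    p_2040 = p_2030 + (p_2030 - p_2020)          # same decade-on-decade change
    return {2020: p_2020, 2030: p_2030, 2040: p_2040, 2050: p_2040}

def oil_product_prices(crude: float) -> dict:
    """Derive import-parity crude, HFO and LFO prices from a crude price."""
    imported = crude * 1.10                      # +10% importation cost
    return {"crude_imported": imported,
            "HFO": imported * 0.8,
            "LFO": imported * 1.33}

print(extend_price(2.0, 2.6))                    # e.g., coal, $/GJ placeholders
print(oil_product_prices(10.0))                  # crude placeholder, $/GJ
```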
Emissions Factors and Domestic Reserves
Emissions factors were collected from the IPCC Emission Factor Database [17], which provides carbon emissions factors by fuel. Domestic renewable energy potentials for solar PV, CSP and wind were collected from an IRENA-KTH working paper [18], which provides estimates of potential yearly generation by country in Africa. Other renewable energy potentials were sourced from a regional report by IRENA [19] and the World Small Hydropower Development Report [20], which provide estimated potentials in MW by country. Estimated domestic fossil fuel reserves are from the websites of The World Bank and US EIA [21,22], which provide estimates of reserves by country.
Electricity Demand Data
The nal electricity demand projection is based on data from the TEMBA Reference Scenario dataset [23], which provides yearly total demand estimates from 2015-2070 under a reference case scenario.
Ethics Statement
Not applicable.
Figure 1. Projected costs of renewable energy technologies for selected years to 2050 [7,9].

Final Electricity Demand Projection (PJ) for Malawi [23].

Supplementary Files

Appendix.docx | 2,328.2 | 2021-04-30T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Seco-Tetracenomycins from the Marine-Derived Actinomycete Saccharothrix sp. 10-10
Six new tetracenomycin congeners, saccharothrixones E–I (1–5) and 13-de-O-methyltetracenomycin X (6), were isolated from the rare marine-derived actinomycete Saccharothrix sp. 10-10. Their structures were elucidated by spectroscopic analysis and time-dependent density functional theory (TDDFT)-electronic circular dichroism (ECD) calculations. Saccharothrixones G (3) and H (4) are the first examples of tetracenomycins featuring a novel ring-A-cleaved chromophore. Saccharothrixone I (5) was determined to be a seco-tetracenomycin derivative with ring-B cleavage. The new structural characteristics, highlighted by different oxidations at C-5 and cleavages in rings A and B, enrich the structural diversity of tetracenomycins and provide evidence for tetracenomycin biosynthesis. Analysis of the structure–activity relationship of these compounds confirmed the importance of the planarity of the naphthacenequinone chromophore and the methylation of the polar carboxy groups for tetracenomycin cytotoxicity.
Introduction
Aromatic polyketides constitute a large group of structurally diverse natural products biosynthesized by type II polyketide synthases (PKS II) [1]. Many of these natural products have been widely used as antibacterial, antifungal, and anticancer agents [2,3]. Based on the polyphenolic ring systems and their biosynthetic pathways, bacterial aromatic polyketides are classified as anthracyclines, angucyclines, aureolic acids, tetracyclines, tetracenomycins (Tcms), benzoisochromanequinones, and pentangular polyphenols [4]. Tcms, which have been isolated from Streptomyces glaucescens and Streptomyces olivaceus, represent a separate group of aromatic polyketides featuring a tetracyclic naphthacenequinone chromophore with highly hydroxylated cyclohexenone moiety, and exhibit moderate antibacterial and antitumor activities [5]. Terrestrial and marine actinomycetes are particularly rich sources of bioactive PKS II metabolites. With the advent of molecular tools and advances in biosynthetic-mechanism research, genetic-level investigation of bacterial aromatic polyketides has become possible [6]. PCR-based genetic screening has become a useful approach to identify novel metabolites with desired structural characteristics [7,8].
As part of our screening program for new antibiotics from marine-derived microorganisms [9][10][11], we previously identified Tcm X (7) and four Tcm analogs, saccharothrixones A, B (8), C (9), and D, from the rare actinomycete Saccharothrix sp. 10-10 by PCR screening [12,13]. To investigate the structural diversity of Tcms produced by strain 10-10 and, thereby, to explore structure-activity relationships, we further analyzed the LC-MS data of other fractions of the culture extracts and identified six new Tcm analogs, saccharothrixones E-I (1-5) and 13-de-O-methyltetracenomycin X (6, Figure 1). Saccharothrixones E (1) and F (2) were identified as 5-de-oxo-5-hydroxy derivatives of Tcm X and C, respectively. Saccharothrixones G (3) and H (4) were C-4 epimers of seco-tetracenomycins that featured a unique ring-A-cleaved chromophore. This paper describes the isolation and structural characterization of the six new tetracenomycin congeners 1-6, as well as the structure-activity relationship of their cytotoxicity.
Results
Saccharothrixone E (1) was isolated as a yellow powder. Its molecular formula was determined to be C24H24O11 by HRESIMS, which is 2 mass units higher than that of Tcm X (7). The UV spectrum of 1 exhibited absorption maxima at 266, 277, and 382 nm. The 1H NMR spectrum in acetone-d6 (Table 1) displayed characteristic signals for two aromatic protons (δH 7.44 (s, H-6) and 7.20 (s, H-7)), an olefinic proton (δH 5.61 (s, H-2)), and two oxygenated methine protons (δH 4.77 (brs, H-5) and 4.62 (brs, H-4)). In addition, an olefinic methyl and four methoxy signals were observed at δH 2.79-3.94 ppm. In the 1H NMR spectrum recorded in DMSO-d6 (Table S1), a characteristic feature was the presence of four exchangeable protons at δH 14.70 (brs), 6.07 (brs), 5.47 (d), and 5.23 (s). These spectroscopic data implied that the structure of 1 is closely related to 7. The 13C NMR (Table 2) and HSQC spectra of compound 1 revealed the presence of 24 carbons, and also indicated its close similarity to 7 (Table S2). One of the three ketonic carbonyl resonances observed for 7 was missing in 1, and, instead, an O-bearing methine carbon signal was observed at δC 69.6. This indicated that one of the ketone groups in 7 was replaced by a hydroxy-substituted carbon in 1. A detailed comparison of the NMR data of 1 and 7 revealed the structural similarities in their A, C, and D rings.
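The 2 mass unit difference can be verified with a quick monoisotopic-mass calculation. The sketch below is illustrative and not from the paper; Tcm X's formula, C24H22O11, is inferred from the mass differences stated for compounds 1 and 6.

```python
# Monoisotopic masses of the most abundant isotopes (12C, 1H, 16O).
MONOISOTOPIC = {"C": 12.0, "H": 1.007825, "O": 15.994915}

def exact_mass(formula: dict) -> float:
    """Monoisotopic mass of a neutral molecule from its element counts."""
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items())

m1 = exact_mass({"C": 24, "H": 24, "O": 11})  # saccharothrixone E (1)
m7 = exact_mass({"C": 24, "H": 22, "O": 11})  # Tcm X (7), inferred formula
print(f"{m1:.4f} vs {m7:.4f} Da; difference = {m1 - m7:.4f} Da")  # ~2.016 Da
```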
The HMBC correlations (Figure 2) from H-7 to C-8 (δC 158.1), C-9, C-10 (δC 137.8), C-10a, and C-13 (δC 168.6), and from H-6 to C-5a, C-6a, C-10a, C-11 (δC 166.7), and C-11a confirmed that the naphthalene ring substitution pattern of 1 is identical to that of 7. The correlation from H-2 to C-1 (δC 193.6) and the relatively weaker 4J correlation from H-6 to C-12 (δC 202.2) observed in the HMBC spectrum suggested that the ketone groups at C-1 and C-12 that are characteristic for tetracenomycins remained intact in 1. The strong 3J HMBC correlation from H-6 to C-5 (δC 69.6) indicated that the carbonyl group at C-5 in 7 was replaced by an oxygenated methine group in 1. The remaining HMBC correlations confirmed structure 1 as 5-de-oxo-5-hydroxytetracenomycin X.
The relative configuration of compound 1 was determined by interpretation of the ROESY data. The ROESY cross-peaks (Figure 3) of OH-4a with OH-4, OH-5, and 12a-OCH3, and of OH-4 with OH-5 indicated that they were all located on the same β-face of the ring. The ROESY correlations of H-5 with H-4 and H-2 indicated that H-4 and H-5 were on the α-face. Since the relative structure of ring A in 1 is consistent with those in Tcms C [5] and X (7) [14], it was deduced that compound 1 has the same absolute configuration as 7. The absolute configuration of 1 was further confirmed using time-dependent density functional theory (TDDFT)-electronic circular dichroism (ECD) calculations [15]. The ECD spectra of the possible isomers of 1 obtained by geometry optimization were generated using TDDFT calculations at the B3LYP/6-311++G(d,p) level. The ECD spectrum calculated for the 4S,4aR,5S,12aR-1a isomer was in good agreement with the experimental ECD curve (Figure 4). Consequently, the absolute configuration of 1 was firmly assigned as 4S,4aR,5S,12aR.
The molecular formula of saccharothrixone F (2) was deduced to be C23H22O11 by HRESIMS, one CH2 unit less than that of 1. The UV spectrum of 2 displayed absorption bands almost identical to those of 1, indicating that 1 and 2 have the same chromophore. A comparison of the 1H and 13C NMR data of 1 and 2 revealed the absence of one of the four O-methyl groups of 1 in 2. As compared with that of 1, the resonance for C-12a in 2 was shifted by Δδ −5.3. These data suggested that 2 is the 12a-de-O-methyl analogue of 1. This was further confirmed by 2D NMR experiments. The HMBC displayed cross-peaks from the three O-methyl protons at δH 3.88, 3.96, and 3.90 to C-3 (δC 176.6), C-8 (δC 158.0), and C-13 (δC 168.6), respectively, indicating that the methoxy groups were located at C-3, C-8, and C-13. The HMBC correlations of H-2 with C-12a (δC 82.5) confirmed the replacement of the 12a-O-methyl in 1 by a hydroxy group in 2. The relative and absolute configurations of 2 were determined to be the same as 1 based on the similarity of the ROESY correlations and ECD spectra. This was also supported by TDDFT-ECD calculations for 2 (Figure 4).

Saccharothrixone G (3) was isolated as a yellow powder. The molecular formula was determined to be C24H24O11 by HRESIMS, which is the same as that of 1. Its 1H and 13C NMR data (Tables 1 and 2) differed significantly from those of 1, suggesting a substantial structural change. Analysis of the 1H NMR data measured in DMSO-d6 (Table S1) indicated that one of the four exchangeable proton signals in 1 disappeared in 3. Comparison of their 13C NMR/HSQC spectra revealed that the nonprotonated carbon (C-12a) in 1 was replaced by an O-bearing methine group in 3, suggesting that the fused A and B rings were cleaved at C-12a. HMBC correlations of OH-4a with C-4a and C-12a, and OH-5 with C-5, indicated that the free hydroxy groups at C-4a and C-5 remained in 3, whereas the free hydroxy group at C-4 was missing. Key HMBC correlations from H-4 to the carbonyl carbon C-1 (δC 172.2) suggested that the oxygenated C-4 was connected to C-1 via an oxygen atom to form a lactone ring. Finally, HMBC correlations from H-4 to C-4a, C-5, and C-12a indicated the connectivity of C-4 and C-4a, thus completing the establishment of the planar structure of 3.
The relative configuration of 3 was established by analysis of the NOE correlations. The NOE enhancement between H-12a and H-5 revealed their 1,3-diaxial positions and a half-chair conformation for ring B (Figure 3). The NOE correlations of H-4 with H-5 and H-12a indicated that the methine group C-4 was in an equatorial orientation in ring B while OH-4a was axial. The ECD spectra of four possible isomers were then calculated using the TDDFT-ECD method. The calculated curves for 4S,4aR,5S,12aS-3a (Figure 5) at both the CAM-B3LYP/TZVP and WB97XD/6-311++G(d,p) levels were in good agreement with the experimental spectrum, thus determining the absolute configuration of 3 to be 4S,4aR,5S,12aS.
Saccharothrixone H (4) was determined to be a diastereoisomer of 3 based on their identical molecular formula (C24H24O11) and their similar NMR data. This was verified by HMBC correlations (Figure 2). In addition, similar correlations (Figure 3) observed in the ROESY spectra of 4 revealed that the relative configuration of ring B in 4 was also identical to that of 3. This was suggestive of a 4-epimer of 3. The calculated ECD spectrum of 4R,4aR,5S,12aS-3c fit well with the experimental spectrum of 4. Therefore, the structure of 4 was assigned and named saccharothrixone H.
Saccharothrixone I (5) was obtained as a white powder. The HRESIMS data established its molecular composition as C24H24O11, identical with the molecular formulae of 3, 4, and saccharothrixones B (8) and C (9) [12]. The UV, 1H NMR, and 13C NMR spectra were very similar to those of 8 (Table S3) except for the chemical shifts of the proton and carbon signals in rings A and B, indicating that 5 was a diastereoisomer of 8. The 2D NMR data of 5 (Figure 2) confirmed it had the same planar structure as 8. The ROESY correlation between H-4 and H-12a indicated a 1,3-diaxial interaction. ROESY correlations of H-5 with H-4 and H-12a illustrated an equatorial orientation of the methine group C-5 in ring A, and thus an axial position for OH-4a. Therefore, the relative configuration of ring A in 5 was determined to be identical to that of 8, suggesting that 5 is a 5-epimer of 8. The ECD spectrum of 5 exhibited negative Cotton effects (CEs) at 262 nm and between 299 and 350 nm, and a positive CE at 244 nm. The observed negative CE around 262 nm, ascribed to the n→π* transition of the α,β-unsaturated γ-lactone [16], was opposite to that of 8 (Figure S1), indicating that the configuration of C-5 in the lactone ring B of 5 was opposite to that of 8. This was further confirmed by the TDDFT-ECD calculation, which showed good agreement of the calculated spectrum for 4S,4aR,5R,12aR-5a (Figure 6) with the experimental curve. Therefore, the absolute configuration of 5 was assigned as 4S,4aR,5R,12aR.
Figure 6. Comparison of the experimental ECD curve of 5 and the calculated ECD spectra for 4S,4aR,5R,12aR-5a and 4R,4aS,5S,12aS-5b.
Compound 6 has the molecular formula C23H20O11, one CH2 unit less than Tcm X (7), as determined by HRESIMS and NMR data. The NMR data of 6 were similar to those of 7 except for the absence of one methoxy group signal in 6. The HMBC correlations from the O-methyl protons at δH 3.80, 4.01, and 3.56 to C-3 (δC 174.8), C-8 (δC 159.3), and C-12a (δC 89.0), respectively, located the methoxy groups at C-3, C-8 and C-12a, suggesting that the methoxy group at C-13 in 7 was absent in 6. Therefore, the structure of 6 was determined to be 13-de-O-methyltetracenomycin X.
The identification of compounds 1 and 2 supports our previously proposed biosynthetic pathway for the ring-B-cleaved tetracenomycin derivatives saccharothrixones A-C [12]. In that pathway, saccharothrixones A-C are derived from the intermediate 2 and its C-5 epimer, which undergo an intramolecular nucleophilic addition from OH-5 to the 12-oxo group with simultaneous cleavage of ring B. Similarly, the nucleophilic addition from OH-4 to the 1-oxo group of 1 would result in a divergent pathway committed to the formation of saccharothrixones G (3) and H (4) (Scheme S1) [12]. The co-isolation of the different C-4, C-5, and C-12a epimers of seco-tetracenomycins indicated that the cleavage of rings A and B by the intramolecular nucleophilic addition was not stereocontrolled.
In a previous study [12], we reported that Tcm X and its isomer, saccharothrixone D, showed moderate cytotoxicity (5.4-20.8 µM) against the HepG2, MCF-7, and K562 human cancer cell lines, whereas the B-ring-cleaved derivatives saccharothrixones A-C were inactive at a concentration of 100 µM. To further evaluate the structure-activity relationship of these tetracenomycin congeners, we examined the cytotoxicity of compounds 1-6 against the cancer cell lines mentioned above. All the compounds were found to be inactive at 100 µM. The action mechanism of tetracenomycins was assumed to be intercalation with DNA, which requires a flat structural moiety to move between the base pairs [17]. These results confirmed that the naphthacenequinone chromophore and the planarity of the molecules are vital for their cytotoxicity. The cleaved naphthacenequinone chromophore with a large substituent (ring A) in saccharothrixones A-C and G-I (3-5) probably blocks the intercalation with DNA, causing loss of cytotoxicity. The replacement of the ketone group by the hydroxy group at C-5 in compounds 1 and 2 changes the planarity of ring B, which could lower the effectiveness of intercalation. Compound 6, which differs from Tcm X only at the C-13 substituent (carboxy vs. methoxycarbonyl), showed no effect at 100 µM, indicating that the free carboxy group caused a significantly negative interaction with the DNA. Rohr et al. previously reported that tetracenomycin derivatives with a free hydroxy group at C-8 (elloramycinone) or C-12a (Tcm C) were less active than Tcm X, which has methoxy groups at C-8 and C-12a [17]. Our results further confirmed the importance of the methylation of the polar hydroxy and carboxy groups for their cytotoxic activity. In addition, 1-6 were also evaluated for antibacterial activity, but were found to be inactive (MIC > 64 µg/mL).
Biological Assays
The cytotoxicities of the tested compounds against the human cancer cells HepG2 (hepatocellular carcinoma), MCF-7 (breast adenocarcinoma), and K562 (leukemia) were evaluated by the sulforhodamine B (SRB) assay as described previously [12]. The antibacterial assay was performed by using the agar dilution method [12].
Conclusions
Six new tetracenomycin derivatives, including three seco-tetracenomycins, were isolated from the rare marine-derived actinomycete Saccharothrix sp. 10-10. Saccharothrixones G (3) and H (4) are the first examples of tetracenomycins featuring a novel ring-A-cleaved chromophore. Saccharothrixone I (5), together with the previously identified saccharothrixones A-C from the same cultures, are the only seco-tetracenomycins with a cleaved ring-B skeleton isolated from microbial natural products. This finding not only adds to the structural diversity of tetracenomycins, but also provides evidence for the biosynthesis of tetracenomycin-related polyketides. The structure-activity relationship study indicated that the planarity of the chromophore and the methylation of the polar carboxy groups are important for tetracenomycin cytotoxicity.
"Chemistry",
"Environmental Science"
] |
Evaluation and Modification of the Block Mould Casting Process Enabling the Flexible Production of Small Batches of Complex Castings
Introduction
In the current literature about casting processes, block mould casting is hardly addressed although this process has numerous global applications. Almost all metallic dental implants are manufactured using this process [1][2][3]. This method is also regularly used in the jewellery industry [4]. The block mould casting process is particularly important for manufacturing metallic foams since it is one of the few process routes for producing cellular structures enabling uniform, open-pored foams to be reproduced [5]. As the largest global producer of metallic open-pored foams, the company ERG also uses the block mould casting process but rarely communicates details of the casting process. Due to its high degree of freedom, the block mould casting process is very suitable for the production of bio-inspired technical devices [6].
The broad objective of this chapter consists of markedly shifting the focus of designers and manufacturers of castings to the block mould casting process. In conjunction with rapid-prototyping patterns, this casting method enables extremely complex castings to be variably and flexibly manufactured to their final near-net shape [1,7,8]. A definite structuring of the pattern's surface is transferred to the casting's surface as a consequence of this method's very accurate reproduction, whereby flows around the casting can be optimised in a functionally integrated way [4,9]. Moreover, this chapter should provide the user with the possibility of optimising the block mould casting process with the aid of the depicted test results. As a consequence of the firing process, cracks can be initiated in the mould, by means of which casting defects occur, up to the point of mould leakage. By optimising the mould material's water content and temperature, the mixing duration and the firing temperature, and also by adding supplements, the tendency for mould cracking is minimised through consummate mould material manufacturing. Tests to elevate the cooling rates of the block mould casting process provide improvements in the mechanical properties of the cast metallic components.
State of the Art

2.1. Classification of the Block Mould Casting Processes
The block mould casting process ranks among the precision casting methods, in which the better-known investment casting is also included. This is probably the reason why it is frequently misleadingly referred to in the literature as the investment casting process. These two methods both employ the lost pattern technique. The patterns are either melted or burnt out of the mould after the moulding material has cured and are thus subsequently no longer available for mould manufacturing [1,7,10]. This contrasts with, for example, the widely used sand casting process, in which multiple-use patterns are employed. These patterns are parted in order that they can again be used after forming the positive impression in the sand mould. Since the patterns of precision casting methods are removed from the mould by means of melting or vaporising, they do not have to exhibit mould parting. Owing to this, very complex, final near-net-shaped casting geometries can also be produced which possess undercuts. Apart from these advantages, precision castings exhibit a very low surface roughness compared to sand-cast components, which can also considerably reduce the castings' machining [4]. The difference between the investment and block mould casting processes arises in the moulding material used and therefore in the mould. The patterns for the investment casting process are dipped into a ceramic slurry and subsequently sanded using ceramic granules. After the slurry has dried, this procedure is repeated until approx. 8 to 13 layers exist on the pattern. To manufacture a block mould, the pattern is directly embedded in a ceramic slurry, where gypsum- or phosphate-bonded investments are mainly employed as the mould material [1,11]. Thus, in comparison to the investment casting process, a great deal of process time and expenditure can be saved during the block mould casting process. In addition to this, the block mould's castings are easier to demould.
Process Steps
Figure 1a) schematically depicts the sequences of the block mould casting process. The elements of the pattern are produced by injecting wax into a matrix, usually made of aluminium or steel, and are soldered with the help of bee-glue. Depending on the component to be cast, a wax base or a plate is soldered at a wax chute. If a large number of small castings are to be produced, it is sensible to fasten the patterns of the castings onto a wax base. For the production of larger castings or metal foams, a wax plate is suggested since boxes of perforated steel cuvettes are used. This cuvette stabilises the ceramic moulding material, especially during the firing process. Without this reinforcement, the mould would be damaged because of phase transitions in the moulding material which occur during firing (figure 1b). The perforated steel cuvette is masked by plastic foil; the base is closed by a rubber plug. The inner area of the plug is filled with liquid wax via the steel cuvette and the soldered wax cluster is fixed in it. After the solidification of the fixing wax, the steel cuvette is filled with liquid ceramic slurry. The mixing of the moulding material and the filling of the steel cuvette should be carried out inside a vacuum chamber to lower the gas content in the slurry. The gas would otherwise precipitate during the moulding material's setting and lower the strength of the block casting mould. In addition to this, the surface quality of the casting would be reduced [12,13]. The gas can also be removed from the gypsum by applying a vacuum to the moulds for some minutes after they have been filled with the slurry. In this case, the vacuum treatment should be completed prior to the start of the moulding material's setting process. After a drying period, which depends on the moulding material used, the block casting moulds are usually dewaxed in an oven at 110 °C to 150 °C. When the moulds are completely dewaxed, the firing process is started, in which heating and holding steps are adapted to the moulding material used to prevent cracking due to a too rapid heating rate. After completion of the firing process, the casting temperature is adjusted and held for at least three to four hours. Depending on the fineness of the component to be cast, a vacuum can be generated in the mould to assist the mould filling capacity of the melt (step 7, figure 1). In this way, component cross-sections smaller than 500 µm can be filled, such as those which exist in, for example, open-pored metallic foams [11].
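The thermal steps just described can be captured as a simple schedule data structure. In the sketch below, the dewaxing range (110-150 °C) and the minimum three-to-four-hour hold at casting temperature come from the text; the specific set-points, hold durations, and the firing and casting temperatures are assumptions, since the chapter notes they must be adapted to the moulding material and alloy.

```python
# Hypothetical mould-preparation schedule; values marked "assumed" are
# placeholders, not figures from the chapter.
schedule = [
    {"step": "drying",   "temp_c": 20,  "hold_h": 24.0},  # duration assumed
    {"step": "dewaxing", "temp_c": 130, "hold_h": 6.0},   # within 110-150 °C; hold assumed
    {"step": "firing",   "temp_c": 700, "hold_h": 2.0},   # common GBI maximum; hold assumed
    {"step": "casting temperature", "temp_c": 430, "hold_h": 4.0},  # temp assumed; >= 3-4 h hold
]

for s in schedule:
    print(f"{s['step']:>20}: {s['temp_c']:4d} °C for {s['hold_h']:.1f} h")
```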
Another possibility for supporting the mould filling is to centrifugally load the block mould during the casting process. By maintaining the mould's rotational speed during the solidification phase, this also expedites the supply and the deposition of gases and impurities at the interface between the mould and the casting [1,2]. After the mould has cooled down, the casting can be recovered by water jet or mould-solving agents and removed from the casting system by a saw.
The quality of the mould, and therefore that of the casting, is decisively determined by the moulding material's mixing and by the firing process (steps 4 and 6, figure 1a). The mould tends to form cracks (figure 1c) during the firing process, which can produce casting defects [3,4,14]. In the worst case, the liquid metal does not remain within the cavity but runs out of the mould through the cracks. In order to minimise the tendency for the block casting mould to crack, the moulding material's manufacture is generally optimised with respect to the water's temperature and quantity, the mixing duration and the addition of supplements.
Since ceramic moulding compounds result in low cooling rates during solidification, it is, moreover, expedient to implement measures to elevate the cooling effect of the block mould in order to improve the mechanical properties of the metallic components, which are closely connected with the casting's cooling conditions.
Moulding and Casting Materials
A basic dilemma arises when a moulding material used in the lost mould process is chosen and prepared. The moulding material should exhibit a certain strength to withstand the loads during the mould's handling, the thermal stresses, and the load due to the casting process. After the mould has cooled down, it should also exhibit low strength in order to easily remove the casting from the mould [3,14].
Besides these basic aspects, other requirements are imposed on the moulding materials for the block mould casting process [15]. The block mould casting process's moulding materials consist of a refractory material, usually quartz, cristobalite, or a mixture of both, and a binder, whereby gypsum, phosphate, metallic oxides or silicates are used. The choice of the moulding material for producing a block casting mould mainly results from the casting temperature of the material which has to be cast [16].
For aluminium and silver alloys as well as nickel-chrome alloys, a moulding material with 25 to 30 wt.% gypsum and 70 to 75 wt.% silicon oxide is used because gypsum-bonded investments (GBI) exhibit a good ability to collapse after casting [17].
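A minimal batching helper for the composition range just quoted is sketched below; the default gypsum fraction and the water/powder ratio are assumptions, since the chapter does not specify a mixing ratio.

```python
# Illustrative GBI batch calculator for the 25-30 wt.% gypsum /
# 70-75 wt.% silica range quoted above. The water ratio is a placeholder.
def gbi_batch(total_powder_g: float, gypsum_frac: float = 0.28,
              water_per_100g_powder: float = 40.0) -> dict:
    if not 0.25 <= gypsum_frac <= 0.30:
        raise ValueError("gypsum fraction outside the quoted GBI range")
    return {
        "gypsum_g": total_powder_g * gypsum_frac,
        "silica_g": total_powder_g * (1.0 - gypsum_frac),
        "water_g": total_powder_g * water_per_100g_powder / 100.0,  # assumed ratio
    }

print(gbi_batch(1000.0))  # {'gypsum_g': 280.0, 'silica_g': 720.0, 'water_g': 400.0}
```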
In the past, the gypsum-bonded moulding material consisted of a mixture of silica and plaster of Paris. Over the last few decades, this material has been modified by the addition of boric acid, pigments and reducing agents to, among other things, increase the strength [14]. Gypsum is produced from the sedimentary gypsum rock, whose prismatic crystals are bonded by water molecules. By treating in an autoclave, the gypsum is transformed into the unstable hemihydrate state, which needs water to restore the natural dihydrate state. When the GBI is mixed with water, branched gypsum crystals are formed which bond the refractory of the moulding material [4].
Although different statements are made in the literature about the thermal stability of the binder, experts and producers of GBI agree that the gypsum's decomposition does not start in the temperature interval of 650 °C to 700 °C, which covers the common maximum firing temperatures for GBI [18,19]. The GBI moulds should not be heated above 700 °C to 750 °C because the residual carbon from the pattern's wax then reacts with the gypsum from the moulding material. This reaction produces sulphur dioxide, which decreases the surface quality of the castings and, in the case of gold components, the mechanical properties [19].
High-melting-point alloys are cast in phosphate-bonded investments (PBI) consisting of 75 wt.% to 90 wt.% silica (quartz or cristobalite) and magnesium- or ammonium-magnesium-phosphate. This phosphate is formed by the reaction of magnesium with monoammonium phosphate in water during the mixing process. The firing process causes water loss, crystallisation and recrystallisation of magnesium phosphates, and forms fused glass, which lends high strength to PBI block casting moulds [12,16].
The high strength of block casting moulds in the green and fired states is the biggest advantage of PBIs. At high temperatures, the strength and surface quality of the moulds decrease due to thermal decomposition of the binder, especially at casting temperatures above 1375 °C [14,16].
Phosphate-bonded moulding materials are used for a broad spectrum of materials, e.g. gold, titanium, nickel, chrome, and cobalt-chrome alloys [20,21].
The strength of gypsum- or phosphate-bonded block casting moulds depends on a multiplicity of influencing factors, such as the binder, additives, the storing and setting environment, and the firing temperature. It has to be taken into account that porosity in the bonded moulding material considerably influences the strength of the polycrystalline, brittle block casting mould [14,21].
Aspects
The block mould casting process exhibits several advantages which, in combination, cannot be found in other casting processes. Using lost patterns, demanding geometries with undercuts can be realised with high dimensional accuracy, reproducibility, and surface quality, resulting in little finishing effort. The surface quality of the finished casting depends on the surface energy of the melt and on the interface energy between the melt and the moulding material. In this context, the composition and purity of the melt as well as the reaction of the melt with the furnace chamber's atmosphere and the moulding material are important [17].
In conjunction with rapid-prototyped patterns, the block mould casting process enables the rapid production of metallic prototypes or small batches of castings. Using a suitable moulding material, nearly all casting materials can be processed with this casting method, and the process time is much shorter compared to the more time-consuming investment casting process [6,8,22,23].
When gypsum- or phosphate-bonded investments are used, process-specific disadvantages result. GBIs and PBIs have so far not been reusable. This raises the price of the moulding material and limits the economic batch size to around 1,500 castings, depending on the size and geometry of the components [23]. Prior to mixing with water, the moulding material exists as a powder, which promotes exposure to health hazards in the workplace. The moulds of the block mould casting process exhibit low heat conductivity and high heat capacity. For this reason, the resulting cooling rate is low, which leads to unfavourable casting characteristics and poor mechanical properties. After firing, block moulds exhibit a lower strength compared with other moulding materials. This limits the casting weight to approximately 10 kg. In addition, their low strength can lead to cracks in the moulds, decreasing the quality of the castings [23]. Regarding the economics of the block mould casting process, it can be concluded that the process becomes more economical the more complex the geometry and the smaller the dimensions of the potential cast components are.
Influence of sodium chloride and sodium fluoride on gypsum-bonded investment's green and fired strengths
The tendency of block moulds to form cracks has to be reduced to increase the quality of the cast components (see chapter 2.2). This can be achieved by optimising the processing of the moulding material and the firing process, and by using additives. A typical additive is sodium chloride, which lowers the thermal expansion of the moulding material [24]. Information is rarely given about the general effect of sodium chloride on the GBI's compressive strength, and especially about the interaction of sodium chloride and sodium fluoride. Owing to this, the effect of sodium chloride and sodium fluoride on the strength of a GBI after firing was investigated, as described in the following sections.
Materials and methods
For the examinations, GBI specimens were produced containing 1 wt.%, 3 wt.% and 5 wt.% commercial table salt (based on the weight of the mixing water), with and without sodium fluoride. Higher salt contents were not used because they lead to foaming of the GBI, which impairs the handling of this moulding material. As references, GBI specimens without salt and with sodium chloride only were produced. The dimensions of the specimens conformed to the specifications in DIN EN 993-5. To produce the specimens, a silicone matrix was used. The particular amount of salt was dissolved in the mixing water for one minute prior to beginning the mixing process. After adding the moulding material, Goldstar xXx from Goldstarpowders (see table 1 and figure 2 for the chemical composition), to the water, the composition was mixed for three minutes and then poured into the silicone matrix. 45 minutes after pouring, the specimens were removed, and they were fired after an additional 60 minutes. The furnace program was identical to that used for the dewaxing and firing of block moulds made from the same GBI (table 2). The fired specimens were compressed using an Instron 8033 testing machine at a crosshead speed of 2 mm/min. At least four specimens were tested for each test run.
Experimental results
The influence of sodium chloride and sodium fluoride is shown in figure 3. GBI specimens with 1 wt.% sodium chloride and without fluoride exhibit a compressive strength increased by up to 10 % in comparison with the GBI specimens without additives. With a 1 wt.% mixture of sodium chloride and sodium fluoride, the strength can be increased by about 55 %.
With the addition of these two salts, the gypsum binder reacts with sodium chloride and sodium fluoride according to the reaction equations given in [25]. Because the strength of the GBI in the green state cannot be increased by a salt addition (see chapter 4), it is assumed that the effect can be attributed to the influence of the salt on the formation of sinter phases during the firing process. The decrease in strength at higher salt contents can be explained by the increased setting time, which is observed when the GBI has a high salt content [26]. With an increased setting time, the gypsum dendrites start to coarsen due to Ostwald ripening. This causes lesser cross-linking of the gypsum crystals. The effect of sodium fluoride cannot be explained by means of data in the literature.
Influence of water temperature and content, mixing duration and quantity of salt with fluoride on the gypsum's compression strength in its green and its fired states
The quality of castings produced by the block mould casting process depends to a large extent on the quality of the block mould itself (see chapters 2.2 and 2.3). It is only possible to produce defect-free casting structures when the block mould exhibits a strength adequate to withstand crack initiation during the mould's production. To improve demoulding of the casting with the help of water, the moulding material should show a low green strength after casting. For this purpose, the experiments in this section were conducted to optimise the strength of the GBI used.
Experimental design
The experiments focused on the effect of the water temperature, water content, mixing time, and salt addition (sodium chloride + sodium fluoride) on the green strength and fired strength of a GBI. The experiments were conducted with the help of Taguchi's method. This method covers the design of experiments according to statistical aspects, the assembling of models, and the optimisation of the process. With the help of Taguchi arrays, only a few measurements are necessary compared to a complete factorial design. To determine the effect of the parameters in focus, an L9 orthogonal array was chosen. This array allows the monitoring of 4 parameters with 3 settings each, so that non-linear relationships can be detected. Each parameter exhibits two degrees of freedom with its three possible settings, leading to a total of eight [= 4 x (3 - 1)] degrees of freedom. This number agrees with Taguchi's demand that the total degrees of freedom of the chosen array should be larger than or equal to the total degrees of freedom of all experiments. In table 3, the L9 array is shown with the particular test settings. The settings of the water temperature, water content, and mixing time parameters were specified with the help of the literature and the specifications of the GBI manufacturer. The salt content (sodium chloride + sodium fluoride) was determined using the results from chapter 3. The test runs are interpreted with the aid of the analysis of means (ANOM) and the analysis of variance (ANOVA). The ANOM shows the optimisation direction of the factors. The mean deviation from the total average caused by every factor level indicates the main effect of that factor level [27,28]:

$$\bar{\eta} = \frac{1}{n}\sum_{j=1}^{n}\eta_j \tag{1}$$

$$\bar{\eta}_{A_i} = \frac{1}{n_{A_i}}\sum_{j:\,A\,=\,\mathrm{level}\ i}\eta_j \tag{2}$$

where $\eta_j$ are the measured values, $n$ is the total number of trials (for the L9 array, $n = 9$), and $n_{A_i}$ is the number of trials with factor A at level $i$.
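To make the ANOM bookkeeping concrete before the ANOVA equations that follow, here is a minimal Python sketch that builds the standard L9(3^4) level assignment and computes the level means and effects per factor. The response values are hypothetical placeholders, not the measured data of table 5:

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 trials, 4 factors, levels coded 0..2.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Hypothetical mean response per trial (e.g. compressive strength in MPa).
eta = np.array([4.1, 3.8, 3.2, 4.6, 4.9, 4.0, 3.5, 4.2, 3.9])

eta_bar = eta.mean()  # overall mean over all n = 9 trials, eq. (1)

def level_means(factor: int) -> np.ndarray:
    """ANOM: mean response for each level of one factor (a column of L9), eq. (2)."""
    return np.array([eta[L9[:, factor] == lvl].mean() for lvl in range(3)])

for name, col in zip("ABCD", range(4)):
    means = level_means(col)
    effects = means - eta_bar  # main effect of each level
    print(f"factor {name}: level means {means.round(2)}, effects {effects.round(2)}")
```

Because the L9 columns are orthogonal, each level mean averages exactly three trials, which is what makes this small design sufficient to separate the four main effects.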
The effect $E$ of changing factor A to level $i$ is

$$E_{A_i} = \bar{\eta}_{A_i} - \bar{\eta} \tag{3}$$

With the aid of the ANOVA, the statistical significance of a parameter's effect on the response variable can be evaluated. For this, the total result is partitioned into single variances; the variance expresses the squared deviation from the particular average. The calculation of the ANOVA is performed with the help of the following equations [27,28]:

I. Calculation of a correction factor (CF) for an easier calculation of the error:

$$CF = \frac{\left(\sum_{j=1}^{N}\eta_j\right)^2}{N} \tag{4}$$

where $N$ is the number of all trials including all repetitions (for the L9 array with three repetitions per trial, $N = 27$).

II. Calculation of the total sum of squares:

$$SS_{total} = \sum_{j=1}^{N}\eta_j^2 - CF \tag{5}$$

III. After this, the sum of the squared deviations is built for every factor, shown here exemplarily for factor A:

$$SS_A = \frac{\left(\sum \eta_{A_1}\right)^2}{N_{A_1}} + \frac{\left(\sum \eta_{A_2}\right)^2}{N_{A_2}} + \frac{\left(\sum \eta_{A_3}\right)^2}{N_{A_3}} - CF \tag{6}$$

where $N_{A_1}$, $N_{A_2}$, $N_{A_3}$ are the numbers of trials with the parameter on level 1, 2, or 3.

IV. Error sum of squares:

$$SS_{error} = SS_{total} - \sum_{\mathrm{parameters}} SS_{parameter} \tag{7}$$

V. Degrees of freedom ($f_{total}$ and $f_{parameter}$):

$$f_{total} = N - 1 \tag{8}$$

$$f_A = (\text{number of levels of parameter A}) - 1 = 3 - 1 = 2 \tag{9}$$

VI. Evaluation of the variances for each factor (for example A) and of the error:

$$V_A = \frac{SS_A}{f_A} \tag{10}$$

$$V_{error} = \frac{SS_{error}}{f_{error}} \tag{11}$$

VII. Calculation of the ratio of the factor variance and the error variance (for example A):

$$F_A = \frac{V_A}{V_{error}} \tag{12}$$

VIII. Verifying the significance of the factors with the help of the F-test: the calculated F value is compared with a tabulated F value. If the calculated one is bigger, the observed effect is statistically significant. Within the Taguchi method, the signal-to-noise ratio (S/N) represents the summary statistic. For this, the measured values of every test run are reweighted in a target function such that no repetition within one test run remains and the total degrees of freedom are decreased. Different target functions can be chosen for the S/N ratio depending on the quality attribute. If the aim is to minimise the target value (smaller-the-better type), the S/N ratio is calculated in the following way [27]:

$$S/N = -10\,\log_{10}\!\left(\frac{1}{r}\sum_{k=1}^{r} y_k^2\right) \tag{13}$$

For maximisation of the target value (larger-the-better type):

$$S/N = -10\,\log_{10}\!\left(\frac{1}{r}\sum_{k=1}^{r} \frac{1}{y_k^2}\right) \tag{14}$$

where $y_k$ are the $r$ repeated measurements of one test run. Using the S/N values, the ANOM and the ANOVA can be calculated in the same way as with the mean values.
Utilising the Taguchi method in this way aims at setting up the process such that the target value reaches the desired maximum or minimum with low scatter. To verify the data resulting from Taguchi experiments, confirmation experiments should be performed.
The value of the response variable under optimal conditions, $\eta_{optimal}$, is calculated from the optimal parameter settings [28]:

$$\eta_{optimal} = \bar{\eta} + \sum_{\mathrm{parameters}} \left(\bar{\eta}_{opt} - \bar{\eta}\right) \tag{15}$$

with $\bar{\eta}$ = overall mean of the target value and $\bar{\eta}_{opt}$ = mean of the target value at the optimal level of the respective parameter. The confidence intervals for the confirmation experiment ($CI_{confirmation}$) and for the population ($CI_{population}$) are calculated by

$$CI_{confirmation} = \sqrt{F_{\alpha}(1, f_{error}) \cdot V_{error} \cdot \left(\frac{1}{n_{eff}} + \frac{1}{r}\right)} \tag{16}$$

$$CI_{population} = \sqrt{\frac{F_{\alpha}(1, f_{error}) \cdot V_{error}}{n_{eff}}} \tag{17}$$

where $n_{eff} = N / (1 + \text{total degrees of freedom used in the estimate of } \eta_{optimal})$ and $r$ is the number of confirmation runs.
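A minimal sketch of the evaluation steps (ANOVA F-test and S/N ratios), assuming hypothetical raw data and, for brevity, pooling everything except one factor into the error term; the study itself evaluates all four factors, and the critical F value would normally be taken from an F table:

```python
import numpy as np
from scipy import stats

# Hypothetical raw data: 9 trials x 3 repetitions (e.g. strength in MPa).
rng = np.random.default_rng(0)
y = 4.0 + rng.normal(0.0, 0.3, size=(9, 3))

N = y.size                      # 27 observations in total
CF = y.sum() ** 2 / N           # correction factor, eq. (4)
SS_total = (y ** 2).sum() - CF  # total sum of squares, eq. (5)

def SS_factor(levels: np.ndarray) -> float:
    """Sum of squares for one factor given its level assignment per trial, eq. (6)."""
    ss = 0.0
    for lvl in range(3):
        block = y[levels == lvl]            # all repetitions at this level
        ss += block.sum() ** 2 / block.size
    return ss - CF

colA = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])  # factor A column of the L9 array
SS_A = SS_factor(colA)
f_A = 3 - 1                                   # eq. (9)

# Simplification: only factor A is separated; the remainder is treated as error.
SS_error = SS_total - SS_A
f_error = (N - 1) - f_A
V_A, V_error = SS_A / f_A, SS_error / f_error  # eqs. (10), (11)
F_A = V_A / V_error                            # eq. (12)
F_crit = stats.f.ppf(0.95, f_A, f_error)
print(f"F_A = {F_A:.2f}, significant: {F_A > F_crit}")

# Signal-to-noise ratios per trial, eqs. (13) and (14):
sn_smaller = -10 * np.log10((y ** 2).mean(axis=1))        # smaller-the-better
sn_larger = -10 * np.log10((1.0 / y ** 2).mean(axis=1))   # larger-the-better
```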
Production and testing of specimens
For the production of the compression specimens, the corresponding amount of GBI (Goldstar xXx from Goldstarpowders, table 1 and figure 2) was mixed with a maximum of 825 g water with the help of a drill stirrer. If salt was used for a test run, the salt was dissolved in the mixing water for 1 minute. After mixing, the slurry was poured into a silicone matrix to produce compression specimens according to DIN EN 993-5; after 45 minutes the specimens were removed and fired (table 2). The compression tests were conducted using an Instron 8033 at a crosshead speed of 2 mm/min. For every test run, three and five specimens were tested in the green and fired states, respectively.
Experimental results
Table 5 summarises the results of the compression tests and the S/N ratio for every test run. The aim of the presented experiments was to maximise the GBI's strength in the fired state and to minimise it in the green state. Accordingly, the larger-the-better and the smaller-the-better S/N ratios were selected for the fired state and the green state, respectively.
The results of the ANOM are presented in figures 4 to 7; tables 6 and 7 present the results of the ANOVA. Figures 4a) and 5a) and table 6 show that the water temperature and the mixing time have little effect on the green compression strength of the GBI used. This contrasts with the water and salt contents (figures 4b and 5b). From the literature, it is known that the strength of GBI decreases with increasing water content. Owing to its heterogeneous nucleation, gypsum spontaneously crystallises without being in its equilibrium condition even in the first stages of the hydration process. Excess water, which is important for the flowability of the moulding material, is not bonded after the rehydration of the gypsum and evaporates during setting, leaving pores in the material. This microstructural defect considerably decreases the strength of the GBI. In addition to the pore formation, an increased water content leads to an increase in the gypsum's setting time, which produces a coarsening of the gypsum crystals due to Ostwald ripening. These crystals build a less dense network which cannot carry the magnitude of load that a better cross-linked gypsum crystal network is capable of [21,26,29].
Increasing the salt content from 0 wt.% to 1 wt.% severely decreases the green strength of the GBI (figure 5b). This result cannot be explained with the help of the literature.
In the fired state, all the varied parameters show an effect on the compression strength.The maximum strength is achieved with a mean water temperature, low water content, mean mixing time and a high salt content.
The effect of the water temperature on the GBI's compressive strength is similar in both the green and the fired states, but the impact is more pronounced in the fired state. The best water temperature for optimising the compressive strength lies in the range of 17 °C to 55 °C. A reason for this could be the change in the gypsum's solubility in water. When the water reaches a temperature of approx. 27 to 35 °C, the solubility of gypsum in water is decreased, resulting in a lower amount of hydrated gypsum. This gypsum is hydrated between the start of mixing and the setting. Due to this, fewer gypsum crystals are formed and thus the strength of the moulding material is decreased [30,31]. In contrast to the green state, the effect of different water contents on the compression strength of GBI is minor. The effect of the pores, which form because of excess water, is lower, probably because some sinter phases are formed during the firing process.
Table 7. ANOVA of the compression strength (raw data). The tabulated critical F-value is 3.32.
The impact of the mixing duration on the compressive strength shows the same tendency in the green and in the fired state. The strength rises from the beginning of the mixing process up to 180 seconds. After this peak, the strength decreases with continued mixing. Mixing for 180 seconds advances the hydration of the gypsum and leads to a higher strength after setting and firing, respectively, because more gypsum crystals are precipitated. However, once precipitated, the crystals can be destroyed by the mixing. Beyond the ideal mixing duration, the destruction of the crystals dominates and the resulting strength decreases again [26].
The influence of the salt content can be attributed to the formation of sinter phases during the firing process.
Confirmation tests
The specimens for the verification test runs were produced with the parameter settings summarised in table 8. In addition to this, the calculated and measured values for the reference specimens are presented.
The measured values lie within the calculated scattering bands and deviate only marginally from the calculated average values. Firstly, it can be concluded that the obtained results are highly significant. Secondly, there is no interaction between the parameters; otherwise the calculated values would differ more strongly from the measured results, since these calculations are based only on the single effects of the parameters. It is interesting that there is no interaction between the water content and the mixing duration. Although the setting process is slowed by an increased water content, a mixing duration longer than 180 seconds in combination with high water contents has no unfavourable effect on the compressive strength of GBI after either setting or firing. Since the unfavourable effect of the mixing duration can be attributed to the destruction of gypsum crystals, it follows that, independent of the water content, a nearly equal amount of gypsum crystals is present after the mixing process.
Table 8. Settings of the parameters of the confirmation tests, the calculated strength for a confidence interval of 95 %, and the measured strength.
Influence of glass fibre volume and fibre length on the strength of fired gypsum-bonded investments
When the compositions of several GBIs are examined, it can be seen that some products contain glass fibres to increase the strength of the block mould. The reinforcement of GBI and the effect of glass fibres on the properties of GBI have been the focus of several examinations [32][33][34][35][36].
The results were contradictory regarding the effect of the glass fibres on the compressive strength of GBI. Due to this, the aim of the following analyses was to determine the influence of the glass fibre volume and glass fibre length on the compressive strength of a GBI.
Materials and methods
Uncoated short glass fibres were used, since coated glass fibres could lead to both a chemical reaction and a bond between fibres and moulding material, which exerts an adverse effect on the reinforcement [32]. The maximum fibre content was held constant at 1.0 wt.% based on the weight of the moulding material. Larger amounts of glass fibres could not be added because the viscosity of the slurry would otherwise have been too high. The mean glass fibre content was 0.5 wt.%. The glass fibre lengths employed were 3 mm, 6 mm, and 12 mm. For the production of the compression specimens, 1500 g of the GBI Goldstar xXx from Goldstarpowders (composition see table 1 and figure 2) was mixed with the glass fibres and then with 675 g water for three minutes. The slurry was poured into a silicone matrix to produce compression specimens according to DIN EN 993-5. The specimens were removed and fired (table 2) after 45 minutes and 60 minutes, respectively. With the help of an Instron 8033 testing machine (crosshead speed = 2 mm/min), the compressive behaviour of at least four specimens was determined.
Experimental results
The results of the compression tests are shown in figure 8. In general, the reinforcement of GBI specimens using glass fibres decreases the scatter of the compression test results: independent of the volume and length of the glass fibres, the weakening effect of the pores, which are the main reason for the deviations of the measured compressive strength, is attenuated. The moulding material's compressive strength is only slightly reduced by glass fibres with a length of 3 mm at a fibre content of 0.5 wt.%. An increase in the compressive strength of up to 25 % can be achieved by reinforcing the GBI using 1.0 wt.% of 6 mm or 12 mm glass fibres. This content and these lengths improve the resistance of the moulding material against the initiation and growth of microcracks. Following microcrack initiation due to loading of the fibre-reinforced moulding material, the crack reaches the glass fibre and grows along the interface between matrix and fibre, thus dissipating crack energy [32,36].
Influence of metal powder in the moulding material on the GBI's setting behaviour and its compressive strength as well as on the cooling behaviour, the metallographic and mechanical properties of an A356 (AlSi7Mg0.3) alloy
Despite its several advantages, the block mould casting process exhibits some unfavourable characteristics (see chapter 2.4). One of the most severe problems is the low cooling rate of this casting process during the solidification of the liquid metal. For example, this cooling rate amounts to approximately 0.1 K/s during the solidification of an A356 alloy at a casting temperature of 720 °C. In contrast, the high pressure die casting process reaches cooling rates between 50 and 85 K/s for a similar alloy and wall thickness [37]. Owing to the low heat conductivity of the moulding materials used, the cast metal solidifies relatively slowly, producing a coarse microstructure and a high dendrite arm spacing (DAS). These microstructural parameters lead to decreased mechanical properties of the metallic components [23,38]. Besides the mechanical properties, the low cooling rate impairs the casting properties. Inferior casting properties lead to volume deficits in the casting, such as shrinkage or microporosity. Low cooling rates induce a segregation of elements in the melt adjacent to the solidification front, thus promoting constitutional undercooling. This changes the solidification morphology, with a tendency to produce exogenous mushy and endogenous pasty structures. These types of solidification morphologies decrease the feeding ability of the metallic melt, from which microporosity results [39]. Methods that improve the cooling rate of the block mould casting process therefore lead directly to an improvement in the casting and mechanical properties.
The trials presented here aimed at increasing the cooling rate of the block mould casting process with the help of iron powder. Besides the effect of the iron powder on the cooling of the metal castings in the block moulds, its impact on the setting and strength of the moulding material was evaluated.
Materials and methods
To evaluate the influence of iron powder, compression specimens according to DIN EN 993-5 were produced. A specific amount of iron powder was mixed with 1500 g Goldstar xXx from Goldstarpowders (composition see table 1 and figure 2) and then introduced into 50 wt.%, 55 wt.% and 60 wt.% water. After 3 minutes of mixing, the liquid moulding material was poured into a silicone matrix. After 60 minutes, the GBI specimens were removed and, following another 60 minutes, the specimens were fired (table 2). At least three specimens per test run were compression tested at a crosshead speed of 2 mm/min with the aid of an Instron 8033 testing machine.
To investigate the effect of applying a vacuum, some silicone matrixes filled with the liquid iron powder-GBI mixture were evacuated.
The effect of the metal powder in the moulding material on the cooling behaviour of an A356 alloy was investigated with the aid of thermal analyses, tensile tests, and metallographic sections.
To enable this, a wax assembly with four tensile specimen patterns was produced, whereby a type K thermocouple was integrated into the middle of one tensile specimen per assembly. With the help of preliminary tests, a maximum (75 wt.%) and a mean (37.5 wt.%) iron powder content were specified. After embedding the patterns into the GBI (Goldstar xXx from Goldstarpowders, composition see table 1 and figure 2) mixed with iron powder, the block moulds were dewaxed, fired, and cooled down to 200 °C. A pre-grain-refined, unmodified, N-degassed A356 alloy was cast into the block moulds at a casting temperature of 720 °C. After the melts had cooled down, the tensile specimens were tested. Specimens with integrated thermocouples were used to prepare metallographic sections. For this purpose, the corresponding samples were mounted using the embedding compound Araldit DBF together with the hardener Ren HY 956. The mounted samples were ground using different grades of abrasive paper and polished using a VibroMet. Light microscopy images were taken at different magnifications using an Axio Imager A 1 m from the company Zeiss.
Experimental results
At the beginning of the investigations, an attempt was made to determine the iron powder content at which the GBI would no longer set. It could be shown that the setting process was never interrupted, irrespective of the iron powder content. Figures 9 and 10a) show the effect of the iron powder on the setting time of the GBI used. In addition to the increase owing to a higher water content, the setting time increases with elevated iron powder content, yet the setting process is not interrupted. A reason for this cannot be found in the literature. Besides the setting time, the influence of the iron powder on the compressive strength of the GBI used is shown in figures 9 and 10b). Above a critical value (10 wt.% to 25 wt.%), the compressive strength of the GBI is increased independent of the specimen's water content. The maximum difference in compressive strength between specimens with 100 wt.% iron powder and those without is more than 750 %. With the aid of references in the literature, it is assumed that the iron decreases the melting point of the GBI's refractory. Due to this, more sinter phases are formed, which considerably increase the strength of the moulding material [40].
The analysis of the evacuated GBI specimens with different metal powder contents indicates that the iron powder is well distributed within the fired GBI specimens (figure 10b).
There is no segregation of the metal powder due to the underpressure.
Figure 11a) shows the effect of the iron powder in the moulding material on the cooling behaviour of an A356 alloy. Using the first derivative of the cooling curves, the liquidus temperature and the temperature at the end of solidification were detected. In combination with the respective times, the cooling rates achieved by the block moulds were calculated (figure 11b). The cooling rate of the block mould casting process can be increased by a factor of 3.5 with the help of 37.5 wt.% iron powder in the moulding material. The addition of more iron powder yields no further increase.
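The described derivative-based evaluation might look roughly as follows in Python; the detection threshold and the thermocouple trace are hypothetical, and a real trace would normally be smoothed before differentiation:

```python
import numpy as np

def cooling_rate(t: np.ndarray, T: np.ndarray) -> float:
    """Mean cooling rate (K/s) between liquidus and end of solidification.

    Both events are read from the first derivative dT/dt of the cooling
    curve: latent-heat release flattens the curve, so the solidification
    interval shows up as a region of markedly reduced cooling slope.
    """
    dTdt = np.gradient(T, t)
    # Hypothetical detection rule: solidification = region where |dT/dt|
    # drops below half of the mean pre-arrest slope (first 10 samples).
    arrest = np.abs(dTdt) < 0.5 * np.abs(dTdt[:10]).mean()
    i_liq = np.argmax(arrest)                        # first arrested sample
    i_sol = len(arrest) - np.argmax(arrest[::-1]) - 1  # last arrested sample
    return (T[i_liq] - T[i_sol]) / (t[i_sol] - t[i_liq])
```

For a trace with no arrest region this sketch would misfire (np.argmax returns 0 on an all-False mask), so a production version would validate the detected interval against the expected liquidus range of the alloy.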
The increased cooling rates are reflected in the dendrite arm spacing (DAS) of the A356 alloy. Owing to the 3.5-fold increase in solidification rate due to the iron powder addition, the DAS is decreased by about 33 % in comparison to specimens produced using moulds containing no iron powder (figure 11a). Since the cooling rate does not change further, no change in the DAS was found on increasing the iron powder content from 37.5 wt.% to 75 wt.%. However, the effects of the increased cooling rate and the decreased DAS are not reflected in the mechanical properties of the A356 alloy (figure 12). According to the literature, the mechanical properties of the specimens produced with the modified moulds, which promote a higher cooling rate, should be better than the properties of the specimens cast using the unmodified moulds. In contrast, the tensile strengths and particularly the elongations to fracture of the specimens produced with block moulds containing 37.5 wt.% iron powder are, in fact, worse than those of specimens cast in non-modified moulds. The tensile strengths and elongations to fracture of specimens from modified moulds increase if the iron powder content is raised to 75 wt.%.
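The reported reduction is consistent with the empirical coarsening law commonly used for Al-Si alloys; the exponent n ≈ 1/3 is an assumption taken from the general literature, not a value given in this study:

$$\lambda_2 = a\,\dot{T}^{-n}, \qquad \frac{\lambda_2(3.5\,\dot{T}_0)}{\lambda_2(\dot{T}_0)} = 3.5^{-1/3} \approx 0.66$$

i.e. a 3.5-fold increase in cooling rate predicts a DAS reduction of roughly 34 %, in line with the measured ~33 %.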
The metallographic sections of the A356 alloy specimens show that these results can be attributed to iron phases in the aluminium-silicon matrix. When the mean iron powder content is employed in the mould, the alloy's microstructure exhibits the most iron-containing phases. It appears that, during mould filling and solidification of the A356 alloy, iron was dissolved out of the mould by the aluminium melt. The solubility of iron in solid aluminium is low, which promotes the precipitation of iron phases during solidification. These plate-like phases decrease the tensile strength and the elongation to fracture and also increase the yield strength (figure 12 and figure 13c). The lower yield strength and the lower fraction of iron phases in the microstructure indicate that the specimens cast in the moulds containing 75 wt.% iron powder dissolved less iron than the specimens produced with moulds containing 37.5 wt.% iron powder. This can be explained by the higher strength of the moulds with 75 wt.% iron powder (figure 13b), which suffer less mould erosion and therefore release less iron.
Influence of the maximum firing temperature and duration on gypsum-bonded investment's compressive strength
During the firing process, thermal stresses develop in the block mould due to the different thermal expansion coefficients of the moulding material's constituents. For example, in GBI-bonded moulds the gypsum contracts during the firing process due to dehydration, and the binder decomposes at elevated temperatures [14]. The dehydration takes place at temperatures of around 128 °C, whereby hemihydrate is formed, whose remaining water is removed at 163 °C. Depending on the temperature, the resulting anhydrite exhibits three polymorphic states: at 200 °C, III-CaSO4 (hexagonal) is transformed into II-CaSO4 (orthorhombic), and above 1200 °C, I-CaSO4 (cubic) is formed [24]. The refractory base material silica, mostly a mixture of cristobalite and quartz, shows phase transformations at 220 °C to 275 °C and at 573 °C, respectively. These transformations lead to an expansion of the phases. The isotropic transformation of cristobalite from the α- to the β-modification takes place rapidly and causes a shearing moment between the refractory and the binder; this moment can induce cracks in the mould. The expansion of quartz is anisotropic and can rapidly elevate the shearing moment caused by the cristobalite [14].
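For reference, the firing-side dehydration and transformation sequence of the binder sketched above can be summarised as follows, with the temperatures as given in the text:

$$\mathrm{CaSO_4\!\cdot\!2H_2O \;\xrightarrow{\approx 128\,^\circ C}\; CaSO_4\!\cdot\!\tfrac{1}{2}H_2O \;\xrightarrow{\approx 163\,^\circ C}\; III\text{-}CaSO_4 \;\xrightarrow{200\,^\circ C}\; II\text{-}CaSO_4 \;\xrightarrow{>1200\,^\circ C}\; I\text{-}CaSO_4}$$

with the water of crystallisation released in the first two steps; this is the reverse of the setting reaction given in chapter 2.3.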
Independent of the crack formation due to the refractory's expansion, cracks can be initiated in the block mould by the expansion of the wax pattern during the dewaxing process or by too rapid cooling after firing [14].
Reducing crack initiation leads directly to a qualitative improvement of block mould cast components. For this purpose, the firing process was modified with regard to the maximum firing temperature in the following investigations.
Materials and methods
The effect of the maximum firing temperature on the strength of GBI was analysed with the help of compression tests. For this, the moulding material Goldstar xXx from Goldstarpowders (composition see table 1 and figure 2) was used to produce compression specimens according to DIN EN 993-5. The mixing procedure was conducted in accordance with the results in section 4. Two types of specimens were produced: one with the best and one with the worst settings of the mixing parameters (table 9).
The general furnace program is summarised in table 2, with varying maximum firing temperatures. The highest furnace temperature was fixed at 950 °C to avoid a decomposition of the gypsum [14]. The compression specimens were tested using an Instron 8033 at a crosshead speed of 2 mm/min.
Table 9. Mixing conditions for the production of the GBI specimens.
Experimental results
Figure 14 shows the results of the firing experiments. By increasing the maximum firing temperature up to 950 °C, the compressive strength of the GBI specimens is increased independent of the starting strength. A probable reason for this is that a greater quantity of sinter phases is formed at higher firing temperatures, which raises the strength of the moulding material. However, the scatter of the strength values increases with elevated firing temperatures. In addition, there is a large difference between the strength of specimens produced using optimised mixing parameters and that of the other specimens.
Conclusions
The block mould casting process offers several attractive advantages which, in combination, exist only in this process. The high dimensional accuracy and surface quality of the potentially complicated castings result in low finishing effort. To produce high quality block mould cast components, the tendency of the block mould to crack has to be minimised. In the investigations presented here, different approaches were followed to modify the block moulds:
1. The addition of sodium chloride and sodium fluoride to the mixing water considerably increases the strength of the GBI block moulds. Regarding the handling of the GBI, a salt content of one weight per cent should not be exceeded.
2. In the green state, only the water and salt contents have a statistically significant influence on the strength of the GBI. Increasing the water and salt contents decreases the strength of the GBI in the green state. The water temperature and the mixing duration have no significant influence here.
3. The water temperature and the mixing duration exert a statistically significant effect in the fired state. The maximum strength is achieved using a water temperature of 17 °C, a low water content (here 45 wt.%), a moderate mixing duration (here 180 s), and a high salt content of one weight per cent.
4. The results depicted here allow the green strength of GBI to be minimised, providing better demoulding of the block mould cast components. In addition, the strength in the fired state can be optimised to avoid crack initiation in the block moulds. Salt (sodium chloride + sodium fluoride) decreases the green strength and increases the fired strength.
5. One weight per cent of 12 mm glass fibres raises the strength of GBI.
6. Iron powder increases the strength of GBI and the cooling rate of the moulds. Owing to the best compromise obtained between erosion resistance and cooling rate, a powder content of 75 wt.% is suggested. To avoid the uptake of iron by the aluminium melt, coating of the wax pattern is advised.
7. With increased maximum firing temperature, the strength of GBI rises, whereby a temperature of 950 °C should not be exceeded.
Figure 1. a) Basic steps of the block mould casting process. b) Fired block mould without stabilising cuvette. c) Block mould with cracks after the firing process.
Figure 2. X-ray diffraction analysis of the GBI Goldstar xXx from Goldstarpowders.
Figure 3. Influence of sodium fluoride and sodium chloride content on the gypsum-bonded investment's compression strength.
Figure 4. Influence of a) water temperature and b) water content on the green compression strength of gypsum-bonded investments.
Figure 5. Influence of a) mixing time and b) salt content on the green compression strength of gypsum-bonded investments.
Table 6.
Figure 6. Influence of a) water temperature and b) water content on the compression strength of fired gypsum-bonded investments.
Figure 7. Influence of a) mixing time and b) salt content on the compression strength of fired gypsum-bonded investments.
Figure 8. Influence of short fibre content and short fibre length on the compression strength of fired gypsum-bonded investments.
Figure 9. Influence of iron powder on the compressive strength and setting of gypsum-bonded investments with a) 55 wt.% water and b) 60 wt.% water.
Figure 10. a) Influence of iron powder on the compressive strength and setting of gypsum-bonded investments with 65 wt.% water. b) Influence of vacuum on the distribution of iron powder in the ceramic specimens.
Figure 11. Influence of iron powder on the cooling rate of block moulds cast with an A356 alloy at a casting temperature of 720 °C and a mould temperature of 200 °C. a) Cooling curves. b) Calculated cooling rates and the DAS of the specimens.
Figure 12. Influence of iron powder in the block mould on the tensile properties of an A356 alloy.
Figure 14. Influence of the maximum firing temperature on the compression strength of gypsum-bonded investments.
Table 2. Furnace program of the firing process.
Table 3. Orthogonal L9 Taguchi array.
Table 4 summarises the settings of the parameters. If the experiments had been conducted with a complete factorial array, 3^4 = 81 test runs would have had to be performed.
Table 4. Parameter settings of the experiments.
Table 5. Results of the compression tests and calculated S/N ratios.
Theoretical modeling of the logical-procedural system organizations of the history of society
This article focuses on the metaphysical foundations of educational philosophy. They are considered as the sources of future human modeling and of the commission of certain actions. The article explores the evolution of these principles in the development of human civilization. The causality principle is considered as a fundamental ontological characteristic of being. It suggests that a human being can realize his desire for freedom only by submitting his life to the universal objective law. In the causal perspective, any phenomenon is considered as the consequence of a cause and, at the same time, as the cause of some other consequence. The model of the world of primitive man could not and did not contain a picture of nature as a certain arrangement of phenomena united by unified cause-and-effect laws. However, this does not mean that our primitive ancestors imagined the world around them without categories of order; rather, their minds held a different kind of order.
Introduction
Epistemological standards, ideals, and norms of modern knowledge are reference points for socio-historical knowledge and generate the corresponding reflection of the logical-processual characteristics of history. Today, this concerns above all the dynamics and development of history: the revision of the classical scheme of the epistemological relation ("subject/object") implies an appeal to the integrity of the determinants of knowledge. The specificity of the vision of historical procedurality changes in such a way that the "lifestyle" and anthropological "background knowledge" of modernity, with its dialogics, the themes of action and responsibility, the ethics of discourse, and the ethics of the human species, are directly included in cognition and discursive practices [1].
The purpose of this article is to consider the question of the relation between free will and the normative order, as well as the possibility of constructing social reality and a single unified world. The article concentrates on the transition of the historical process from the causal order to the normative one throughout the 20th century. It states the hypothesis that in the 20th century the world entered the era of controlled history, and consequently the causal order is rapidly being replaced by the normative order, in which the historical process is increasingly planned and predictable, and historical events and phenomena are more likely the embodiment of a preconceived plan of action [2].
In this regard, the socio-philosophical approach to the epistemology of historical processes should be organized as a complex correlation of vital-organic and conceptual-discursive characteristics, taking into account not only dialogical but also destructive programs and strategies toward tradition [3].
Problem statement
Indeed, the cognitive significance of the principle of historicism, without which it is impossible to construct an epistemology of the processes of history, consists first of all in the requirement to consider nature and social life as continuous in time. And just as the continuity of space and time is realized in a complex and diverse manner, the application of the principle of historicism requires attention to both general and special patterns of the flow of time in nature and society. Due to the ontological inhomogeneity of time, which is found in the variety of the temporal properties of material systems, in each specific case it is necessary to clarify the specific patterns of the transition from the old to the new.
However, within the epistemological field of the procedural organization of history itself, it makes sense to distinguish between the historical tendency of epistemology and its modern conception, a distinction that becomes all the more significant given the difficulty of defining modernity and the subject, status, and "prejudices" of epistemology. In this case, first of all, we should take into account the fundamental importance of the "epistemological turn" associated with the name of M. Foucault [4].
The question concerns, in the most general terms, the extreme importance of interpretation as a basic epistemological procedure of humanitarian-historical cognition. The relationship between foundationalism and relativism acts as a dividing line in contemporary epistemological debates. Consequently, a fundamental question arises about the possibility of a dialogue between both positions: if such a dialogue is not possible, then the problem of historical integrity and of a holistic view of historical processes becomes even more acute. In such a complex situation, systemic analysis and a consistently built typology of interpretations in socio-humanitarian cognition provide an opportunity to identify the theoretical epistemological dominant in humanitarian cognition and to substantiate the understanding of interpretative prerequisites and the heuristically creative nature of interpretation [5].
Materials and methods
The methodological basis of the work is the principle of historicism and the dialectical unity and continuity of historical epochs. The method of comparative analysis of two ontological principles, normativity and causality, is used. The following scientific methods are also used: dialectical, historical and substratum, structural and functional, systemic and variant-modeling. Methods and principles of phenomenology, historical hermeneutics, and philosophical dialogue, as well as principles of cultural-historical comparative studies, are applied.
The continued increase in the concentration of capital on the planet led, by the middle of the 20th century, to a situation in which guided history became not only possible but necessary. For the first time, humanity approached the possibility of creating a United World without political borders or national governments. At the beginning of the 21st century, new technological possibilities emerged, and the sixth technological paradigm began to take shape, in which most physical labor will be performed by robots. This development has led to the need to construct a new social structure. The need for the middle class has now disappeared; there remains a need only for the highly skilled work of a limited number of people. From that point on, the normative order began to dominate the causal order more and more. On the agenda was the creation of a single integrated world with a single world government, currency, legislation, and so on. Humanity has come very close to the question of what kind of force will emerge as the architect and designer of this United World [6].
The development of the cause-and-effect series occurs as a smooth transition from one possible world to another. The act of freedom is a break in gradualism that irreversibly transports us into another world, immediately created by this very act. And here we are not talking about cause, but rather about guilt. We are responsible for this transition: we have created this world, and we are responsible for the fact that it now exists. At the same time, guilt is understood not in the moral-evaluative sense but in the ambivalent (metaphysical) sense, for the birth of good or evil out of our deeds is equally likely. Therefore, responsibility does not mean punishment, but the consciousness of one's active participation in life, of one's involvement in existence.
The systemic analysis of the specifics of how interpretation functions in understanding the processes of history keeps the fundamentally important problems of epistemology current. To expand the problem field of the epistemology of history, we can say that the consistent use of the principle of interpretation of the philosophical text of history and of historical processes can be creatively employed to understand the ideas of philosophical dialogue at the level of cultures and national traditions, which is important for establishing understanding between cultures and today's world communities. The categorial and methodological analysis of ideal and real objects of the socio-philosophical understanding of history makes it possible to pose the problem of responsibility ("the ethics of science"). Respectively, the theme of the deed introduces a responsible and constructive representation of the tendencies of development of modern subjectivity [7].
Thus, it makes sense to analyze the logical-processual epistemology of history in such a way as to distance ourselves simultaneously from the extremes of "foundationalism" and "relativism". In discursive terms, there is a move toward value-oriented strategies of understanding, which implies growing attention to intertextual programs of interpretation. The scientific project we substantiate represents an essential invariant of the epistemology of historical processes based on hermeneutic-dialogical constructions. This makes it possible to build a unifying perspective that modern social philosophy lacks: to understand epistemology as a directly realized experience of the formation and genesis of historical knowledge. As a result, we can speak not only of a dialogue of epistemological programs within the meaning of the logic of historical processes but also of the well-known productivity of the "conflict of interpretative schemes": the current strategy of modern semantic genesis is constructed while preserving the universal structures of cognition and its value intentions. Epistemology turns out to be the connecting element that holds the space of cognition in unity; thereby social philosophy and the philosophical epistemology of history regain their generic beginning, without which modern philosophy is impossible [8].
A "trans-epistemological discourse" of the processes of history is formed: in addition to the formational and linear understanding of history, the facts that "fall through" the classical text of the history philosophy are analyzed: (organic matter, body, landscape, other some). Understanding the history processes, within the discursive analysis of social philosophy, is oriented in a reflexive-topological way. It is thus worth considering that the newest epistemology is "ontological" ("eventful") oriented. It is no coincidence that there has been increasing attention not only to the legacy of the "organic school" of history but also to postclassical ontology in the field of philosophy. (A. Badiou, A. Bibikhin, V. Podoroga), which inherits a sustained interest in being-data throughout the XX century (A.F. Losev, G.G. Shpet, M. Heidegger, A.N. Whitehead, G. Bachelard) [9].
Results and discussion
To understand the modern epistemology of historical processes, it is important to keep in mind the transitional positions and "transformed forms" of socio-philosophical knowledge, those positions that occupied a borderline or peripheral place in classical epistemology. After all, in its extreme manifestations the ultimate "subjectivism" can be conceptualized as "objectivism" or "positivism". What is required, therefore, is an epistemology of the phenomenon of modernity itself. The definitions of virtual reality as an epistemological "project of the future" are important here. In any case, it is necessary to take into account classical and post-classical strategies of the formation of historical epistemology, in the sense that it today includes genetically different universals and constants of historical knowledge.
Thus, it can be argued that the specific vision of social subjectivity, as well as of individual regions of history and micro-historical formations, changes significantly when the anthropological "background knowledge" of modernity is directly included in cognition, with its dialogical character, the themes of action and responsibility, the ethics of discourse, and the "applied" themes of understanding the problems of genetic engineering, anti-psychiatry, and euthanasia: everything that can be called the new presence of history. It is these "throw-ins" of vital-existential material into the space of the philosophical reflection of history that "make" the modern philosophy of the historical process. This is all the more important because topological reflection is the actual form of thinking in the space in which attitudes and meanings interlace (V.V. Savchuk, Topological Reflection, Moscow, 2012). Thus, a critical "epistemology of epistemology" is needed, taking into account not only dialogical and constructive but also destructive programs and strategies toward the historical tradition; this may be relevant to the understanding of history and historical knowledge in general.
It should be recognized that the general epistemological space of modernity is in a situation of contradiction and destruction of classical boundaries and rubrics. Therefore, it makes sense to distinguish between the historical tendency of epistemology and modern conceptions of historical epistemology, which is all the more significant given the difficulty of defining modernity and the subject, status, and "prejudices" of epistemology; the term itself requires consistent categorical analysis and conceptualization (in this case we should take into account the fundamental importance of the "epistemological turn" associated with the name of M. Foucault). In other words, a new methodology should be applied to a new sociality and new historical material. Otherwise, it will be impossible to define even the general perspectives of contemporary social dynamics, let alone to formulate the best option for the realization of meaningful perspectives of historical existence. Last but not least, this applies to the existence of contemporary communities, which have an increasing impact on the "big society" [10].
It should first be recognized that such definitions as the "organics of background knowledge", and with them most essentialist definitions of social matter, end up on the list of primordialism, whose positions on many points are fundamentally displaced by social constructivism. This has its explanation, which consists not only in the idea of the "operational" character of history but also in the social sense of the danger of organic images of social existence, associated with the creation or simulation of the threat of the supremacy of a social body. In this view, Europe, for example, may seem like an island attached to the grandiose body of an entirely different empire: the Mongol Empire. It is worth citing a remark on a similar occasion: Europeans who first saw Mongolia imagined that the center of the world was not Rome or the chambers of Louis, but "invisible Asia"; later, Russia performed this function (T.V. Igosheva, '"Aperture, stretched into Hell...", or the music of Hell in N. Zabolotsky's poetic cycle "Rubruck in Mongolia"', in N.A. Zabolotsky: pro et contra. The writers', critics', and researchers' opinions of N. Zabolotsky's personality and works: Anthology, St. Petersburg, 2010, pp. 860-861). N.Ya. Danilevsky, who stands at the origins of the organic understanding of history, noted this circumstance when he cited the words of a European of the last century, an opinion that was the consequence of a certain "mental cartography" in which it seemed "from a distance" that "Russia throws its weight around, as a looming cloud, as some formidable nightmare" (Danilevsky N.Ya., Moscow, 2000, p. 237). This allows us to analyze the overall picture of the methodological specificity of the logical and procedural modeling of history and to consider a dialogue of points of view: first of all, we must speak of classical model-images, contemporary strategy-projects, and post-classical concepts of history. Consequently, it also makes sense to state a first thesis on the constructions of the social process: the positions participating in the topologically presented world view of modernity and post-modernity are most often far from equal, standing not only in complementation but also in opposition.
After all, the practice of socio-humanitarian knowledge of history can acknowledge the possibility of combining the organic principle of haecceity with the requirements of the historical approach; it can prove not only the importance of the time factor but also the presence of an objective variety of temporal properties of material systems. Theoretical description is unthinkable without the influence of socio-political and worldview ideals on the nature and direction of conceptual changes in science in connection with the historical evolution of the concepts of space and time. The point is that the concepts of time and space are deeply rooted in the subject's world of history. Their content includes both information about the history (time) of the object of research and information about the history (time) of the specific social goals and human needs that prompt its cognition and practical transformation.
Based on the assumption of the leading importance of the humanistic content of scientific concepts in historical evolution, the individual is a real subject of research not only for the socio-humanitarian but also for the natural sciences, particularly when we address the processes of history, which are impossible outside their material foundations. This point of view orients us toward the task of including the problem of the individual in the field of analysis of the regularities of organic evolution; it assumes that the entire socio-historical practice is included in the full definition of the subject of science, both as a criterion of truth and as a practical determinant of the value orientations of an individual.
It must be concluded that the appeal to organic history in the context of the epistemology of social processes implies the use of the terms "social time", "biological time", and "physical time", the goal being to characterize the specific differences of time (history) in organic nature and society. If attempts to introduce a definition of specific historical time are regarded as unsuccessful, and if their purpose is only a metaphorical expression of the processes of human development, which in reality proceed continuously within the limits of ordinary physical time, then it is necessary to raise a more general question: the question of the specifics of social life in general.
Once again it should be emphasized: social organic matter does not cease to be nature, but nature acquires special meanings in the social dimension. The history of life is not only, and not even mainly, a chronological record of life, but a causative explanation of the genesis and change of organic forms in the processes of evolution. The opposition of the historical approach to the evolutionary one is based on the view that causative and historical explanations are incompatible. It leads from the history of living beings to the concept of the "geologic time scale", which reflects only those aspects of biological time that are related to its duration [11]. But the fact that the evolutionary theory of life originates from the interpretation of time as an order of cause-and-effect dependencies running according to the laws of continuity is not a basis for taking thought about the organic beyond history. The erroneous identification of time with duration is the source of the absolute contraposition of the causative and historical approaches. Thus, difficulties arise associated with the indistinguishability of physical-chemical and social evolution.
It is possible to remove this rigid opposition through the creation of a theory of biosocial evolution; social synergetics can be of great assistance here: it is quite acceptable to speak of the homogeneity of the concepts of motion and duration applied in various fields of scientific knowledge. Accordingly, a subject must be defined; we can speak of it precisely because of the awareness of the dual nature of the human being, which makes obvious the necessity of the interaction of physical and social subjects in the processes of history.
The actualization of model-images, contemporary strategy-projects and post-classical concepts of history makes it possible to take into account all the basic universals and constants of the modern epistemology of history. The research is thus directed against an abstract conception of history and of the human being in historical knowledge, one that lacks epistemological certainty. This also suggests considering the epistemological gaps in the understanding of history, gaps consolidated by the different methodological models of structuring the processes of history. After all, the very recognition of gaps and differences makes it possible to grasp the complex conceptual unity of the philosophical representation of history and its subjects [12].
It must be stressed that the classical tradition also recognized the significance of differences in the historical process and among the subjects of history. Hegel, in particular, held that morality and right stand in opposition: if morality is the determination of the subjective will, then right involves its objective realization in the forms of the ethical spirit (family, civil society and the state). Thus, when analyzing the logical-processual organization of history, it may be justified to refer to the actualization of the "organic turn" not only in the art of modernity but also in its wider "life-creation", characteristic of modernity with its interest in the problems of the "glocal" existence of cultures and communities [13]. Hence, even in the most virtualized representation of the historical process, the fact of organically founded creativity, affect and aspiration, everything that is peculiar to biopower, remains important. Conversely, the destruction of the organic in existence leads, as a rule, to one or another version of the ideologization of history, for example to the domination of ontologically depreciated forms of unity, existing in a material-substantial and "commodity" form, in which not "man as man" but a concrete "character" of the legitimate form of the structural-symbolic organization of history acts as the "subject of freedom".
In this regard, the "crisis of definitions" that modern thought experiences in addressing the problem of understanding history can be comprehended differently: even if the question of the unity of the world's processes as a whole is not a question of positive knowledge, there is always a constructive possibility of understanding the dominant strategies of structuring historical processes. The fact is that history appears not linearly but as a form of manifestation of human potential (intellectual, volitional, emotional); in any case, the representation of history contains the image of a holistic culture carrying an evaluative attitude toward existence. And this is not only a certain "spirit of the time" but also a creative aspiration toward the future, an "image of the world": it is for the Russian philosophy of life (V.V. Rozanov) that the whole organic way of existence in history is significant.
What we have noted on several occasions about the proximity of the organic-image representation of history to myth should, in this context, be supplemented by an analysis of new forms of structuring the integrity of historical consciousness and knowledge. C. Lévi-Strauss emphasized that the elements of mythological reflection always lie between percepts and concepts and therefore connect present existence with the projects of thinking [15]. This is expressed in the extremely weak distinction between the objective and the subjective, precisely in their organic interpenetration. But as it moves away from the dominance of the mythological, the "symbolic animal" has an increasingly significant impact on the natural and subjective components of world images. In such cases the subject-symbolic sphere receives as its essential definition the signs of the symbolic activity of the historical subject: it is the symbolic activity of the supporting matrices of historical thinking that comes to the fore. And it is here, it should be emphasized, that the process of defining the actual subject of historical thinking is transferred to the system of subjectivity, which can be defined as identification [16].
Regarding the reflexive significance of organic images in understanding contemporary historical processes, the idea of a merely retrospective actualization of the archaic should not arise, although such a move is quite admissible and can be confirmed by several lines of research. The definition of habitus in sociology and social philosophy includes conservative and archaic characteristics: habitus consists of the limits of the agent's subjective aspirations; it sets the bounds within which the agent creates his or her actions, and it also reproduces routine, "unproblematic" actions [17]. The basis of habitual certainty, in all variants of its understanding, is precisely an organic world-perception, which is strongly actualized today in the situation of the "ecological turn" and a humanistically oriented ecosophy.
Conclusions
Here, it should be emphasized, the organic project is close to the ideas of Russian cosmism, precisely in its understanding of the correspondence between the existence of nature and that of humans. The Eurasian notion of the place of development, which is close to cosmism, contains a cosmic vertical in the sense of understanding the position of the human being in space. The place of development, for all its rootedness in the landscape of life, is atopic: one can speak of an adjusted space. It cannot be perceived or experienced; it is identified from the outset. But the trace of this perception is preserved in every concrete topos, for space is everywhere: "the inhabitation of adjusted space is accomplished before the subject of experience makes a conscious reference to the object, as if relying on the intentionality of object space". It is important to take this into account when including the processes of history in the modern informational and communicative context.
Today one can find organic images of history in the space of topological reflection, which confirms that organic thinking remains very important. V.V. Savchuk writes that one can reasonably assume that the origin of the procedure of reflection lies in the archaic sacrifices that stand at the heart of rituals and mysteries. Sacrifice essentially fulfilled the same role later taken over by reflection: that of obtaining a reliable basis for the order of life, its safety and its sense of purpose.
"Philosophy",
"Education",
"History"
] |
Using pulsed mode scanning electron microscopy for cathodoluminescence studies on hybrid perovskite films
The use of pulsed mode scanning electron microscopy cathodoluminescence (CL) for both hyperspectral mapping and time-resolved measurements is found to be useful for the study of hybrid perovskite films, a class of ionic semiconductors that have been shown to be beam sensitive. A range of acquisition parameters is analysed, including beam current and beam mode (either continuous or pulsed operation), and their effect on the CL emission is discussed. Under optimized acquisition conditions, using a pulsed electron beam, the heterogeneity of the emission properties of hybrid perovskite films can be resolved via the acquisition of CL hyperspectral maps. These optimized parameters also enable the acquisition of time-resolved CL of polycrystalline films, showing significantly shorter-lived charge-carrier dynamics compared to the photoluminescence analogue, hinting at additional electron beam-specimen interactions to be further investigated. This work represents a promising step towards investigating hybrid perovskite semiconductors at the nanoscale with CL.
Introduction
Halide perovskites have emerged as exceptional candidates for next-generation optoelectronic applications, as they are high-performing photoactive materials produced at lower costs and processed in a wider range of conditions than many other traditional semiconductors [1]. Hybrid perovskite thin films are heterogeneous at the micro- and nano-length scales in their optoelectronic, structural and chemical properties [2]. Characterization techniques which reveal the structure-property relations are thus fundamental to understanding this family of new materials.
Cathodoluminescence (CL) is a promising candidate for the investigation of emerging semiconductor materials [3]. In this technique, an electron beam excites a semiconductor causing emission of photons, which are subsequently collected and analysed. This allows the optoelectronic properties of the material to be probed at a high spatial resolution.
Guthrey and Moseley [4] recently reviewed the body of work on how CL has helped to understand these novel halide perovskite materials. Here, we focus on CL signals from scanning electron microscopy (SEM). SEM-CL can produce sub-micrometer spatially resolved maps of optoelectronic emission properties, which can be related to phase composition, defects, impurities and degradation products, in both top-view and cross-section geometries [4]. Numerous studies discuss how the focused high-energy electron beam interacts with the specimen causing reversible and irreversible changes, in a process generally known as beam damage. Methylammonium lead iodide films (MAPbI3), the most widely studied hybrid perovskite structure, and subsequent hybrid perovskite compositions are highly sensitive to the current and energy of the electron beam. These can result in knock-on damage, inelastic scattering and localized heating, which can promote the loss of volatile species.

2. Methods

2.1. Fabrication of the hybrid polycrystalline perovskite film

Glass coverslips (18 mm × 18 mm, 0.13-0.17 mm thickness, Academy) were cleaned in acetone and isopropanol (10 min each) in an ultrasonic bath. The substrates were treated for 10 min in an oxygen plasma cleaner immediately before the spin-coating procedure.
The perovskite precursor solutions were prepared by first dissolving PbI2 (1.1 M), PbBr2 (0.22 M), FAI (1.0 M) and MABr (0.2 M) in a mixture of anhydrous DMF and DMSO (4:1 v:v). CsI solution (1.5 M in DMSO) was then added to the precursor solution as 5% of the total volume. To form the (FA0.79MA0.16Cs0.05)Pb(I0.83Br0.17)3 thin films, 50 μl of precursor solution was deposited on each substrate. A two-step spin-coating procedure was used for the thin film formation: 10 s at 1000 rpm, then 20 s at 6000 rpm. Chlorobenzene (120 μl) was deposited onto the spinning substrate 10 s before the end of the procedure. Films were annealed at 100 °C for 1 h. Lead halide precursors were supplied by TCI, organic compounds by Greatcell Solar, and CsI and solvents by Sigma.
The film was fabricated with a small excess of PbI2, which is known to suppress non-radiative recombination in mixed-halide mixed-cation compositions [19,20]. The sample had been exposed to ambient laboratory air for ∼10 h during previous characterization measurements and was consistently stored in a nitrogen box between measurements. Prior to the experiment, the CL and PL emission was checked to match the PL emission reported for similar compositions.
CL hyperspectral mapping and TRCL
A series of 30 CL hyperspectral maps (CL maps) were acquired at different regions on the perovskite film. CL mapping was performed in an Attolight Allalin 4027 Chronos CL-SEM. The spectra were acquired with an iHR320 spectrometer (focal length of 320 mm, 150 grooves per mm grating blazed at 500 nm, 700 μm entrance slit) and an Andor 1024-pixel charge-coupled device (readout rate of 3 MHz, horizontal binning of 2 and ×2 signal amplification). All the measurements were performed at room temperature under high vacuum (<10⁻⁷ mbar). Beam focusing before each CL map was performed on regions of the sample at least 100 μm away from those used for the measurements.
CL maps were taken at various acquisition conditions, as described in table S1 (available online at stacks.iop.org/NANOX/2/024002/mmedia) in the supporting information (SI). These maps were taken at 3 or 6 kV acceleration voltage, at dwell times from 22 to 502 ms, and at rastering pixel sizes ranging from 25 to 250 nm. The CL interaction depth at 3 and 6 keV is estimated to be ∼100 and ∼250 nm, respectively, as calculated in figure S8 in the SI. The electron beam current was varied from 62.5 pA to 10 nA in continuous-wave (CW) beam mode, or from 14 to 115 pA in pulsed mode (PM). PM was obtained by pulsing the electron gun with the third harmonic of an Nd:YAG laser (355 nm) at a pulse width of 7 ps and a repetition rate of 80.6 MHz (12.41 ns period). All beam currents were calibrated using a Faraday cup.
Time-resolved CL measurements were recorded with a time-correlated single photon counting photodetector at an acceleration voltage of 6 kV and a beam current of 115 pA. The resolution of the photodetector is 80 ps. Dwell times were extended until the TRCL signal was two orders of magnitude higher than the background; the resulting acquisition times varied from 60 to 400 s for each peak of interest.
2.3. Processing of the CL data

CL maps were analysed in LumiSpy 0.1 (a HyperSpy-based open-source Python library for luminescence data analysis) [21]. All spectra were background subtracted, cosmic-ray events saturating the spectrometer were removed, and the edges of each map were cropped, as these tend to show higher CL intensities due to uneven beam dwelling at the corners as well as edge effects.
Fitting of the data enables the extraction of the emission shape parameters of the CL signal. Three Gaussian distributions and a constant background offset c were used to fit the CL data, following:

$$I(\lambda) = c + \sum_{i=1}^{3} I_{\mathrm{CL},i}\,\exp\!\left[-4\ln 2\,\frac{(\lambda - x_{0,i})^{2}}{\mathrm{FWHM}_{i}^{2}}\right] \qquad (1)$$

where each Gaussian represents one of the three peaks of interest in this work (the perovskite, the intermediate degradation phase and the PbI2 peaks). Each Gaussian is described by x0, the central peak position; I_CL, the peak height at the central position; and FWHM, the full-width at half-maximum of the peak. Equation (1) was fitted to the spatially averaged CL spectra of each map acquired to generate figure 1. Similarly, the Gaussians were fitted to the CL maps pixel by pixel, resulting in spatially resolved fitted hyperspectral maps, as shown in figure 2.
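The same three-Gaussian fit can be sketched outside LumiSpy with standard SciPy tools. The snippet below is a minimal illustration, not the authors' processing code: the wavelength axis, noise level and initial guesses are hypothetical, with peak centres seeded near the three reported emission bands.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, x0, i_cl, fwhm):
    """Single Gaussian parameterized by centre x0, height I_CL and FWHM."""
    return i_cl * np.exp(-4 * np.log(2) * (x - x0) ** 2 / fwhm ** 2)

def cl_model(x, c, *params):
    """Constant offset c plus three Gaussians; params = (x0, I_CL, FWHM) x 3."""
    y = np.full_like(x, c, dtype=float)
    for j in range(3):
        x0, i_cl, fwhm = params[3 * j: 3 * j + 3]
        y += gaussian(x, x0, i_cl, fwhm)
    return y

# A wavelength axis (nm) and a measured mean spectrum would come from the map;
# here a synthetic spectrum stands in for the data.
wavelengths = np.linspace(450, 850, 1024)
spectrum = cl_model(wavelengths, 5.0, 512, 40, 8, 700, 20, 60, 750, 300, 25)
spectrum += np.random.normal(0, 2, wavelengths.size)  # synthetic noise

# Initial guesses: PbI2 ~512 nm, intermediate ~700 nm, perovskite ~750 nm.
p0 = [0, 512, 30, 10, 700, 15, 80, 750, 250, 30]
popt, pcov = curve_fit(cl_model, wavelengths, spectrum, p0=p0)
print("fitted perovskite peak: x0 = %.1f nm, FWHM = %.1f nm" % (popt[7], popt[9]))
```

In HyperSpy/LumiSpy the analogous approach would use the library's built-in Gaussian model components fitted per pixel rather than this hand-rolled model.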
TRCL decays were normalized and smoothed using a 10-point mean filter before the 1/e values were found.
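Since no fitting model is applied to the decays, extracting the 1/e time amounts to smoothing and thresholding. A minimal sketch, assuming uniformly sampled decay traces (the synthetic data below are placeholders):

```python
import numpy as np

def trcl_1e_time(t, counts, window=10):
    """Normalize a TRCL decay, apply a 10-point moving-mean filter,
    and return the time after the peak at which the trace falls to 1/e."""
    smoothed = np.convolve(counts, np.ones(window) / window, mode="same")
    smoothed = smoothed / smoothed.max()
    t0 = t[np.argmax(smoothed)]                  # time of the peak
    after_peak = t >= t0
    below = np.nonzero(smoothed[after_peak] <= 1 / np.e)[0]
    return t[after_peak][below[0]] - t0 if below.size else np.nan

# Synthetic mono-exponential decay with 0.7 ns lifetime (cf. perovskite peak),
# sampled over one 12.4 ns period of the 80.6 MHz pulse train.
t = np.linspace(0, 12.4, 1000)                   # ns
counts = 1000 * np.exp(-np.clip(t - 1.0, 0, None) / 0.7)
counts = counts + np.random.poisson(5, t.size)   # background counts
print("1/e decay time: %.2f ns" % trcl_1e_time(t, counts))
```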
Results and discussion
A systematic study of how acquisition parameters, especially beam mode and beam current, affect the CL maps of hybrid perovskite films is discussed. Under optimized conditions the perovskite phase exhibits the most robust emission features, which we consider evidence for a more pristine crystal structure.
3.1. Optimization of the conditions for CL studies on hybrid perovskite films

A series of 30 CL maps were taken at different positions on a hybrid perovskite film under various acquisition conditions. For each map, the spatially averaged CL spectrum was reported. Figure 1 shows how the mean emission spectra are affected by various acquisition parameters. Figure 1(a) shows a subset of the mean spectra acquired at beam currents ranging from 14 pA to 10 nA, with darker colours representing higher beam currents, and at two different electron beam modes, continuous lines for CW and dashed lines for PM. The absolute CL intensity values were normalized by the acquisition dwell time of each scan, and all the spectra in figure 1(a) were acquired at the same pixel size of ∼120 nm.
In figure 1(a), three main peaks are observed: the main perovskite peak at 740-760 nm (1.63-1.68 eV) [19], a peak corresponding to PbI2 at 507-518 nm (2.39-2.45 eV) [22], and a broad intermediate peak ranging between 650 and 740 nm (1.68-1.90 eV). The PbI2 peak is primarily ascribed to the small excess of PbI2 used during the fabrication of the film [19]. The broad intermediate peak is referred to as the intermediate degradation phase, as it only becomes prominent in conjunction with electron beam illumination (see figure S2 in the SI). For each peak, the quality of the CL signal was determined from the fitted parameters of the Gaussian models applied to the mean emission spectrum of each map. More specifically, the fitted central peak position (x0) and peak intensity (I_CL) of the different emission peaks were taken as indicators of signal quality. These indicators were then correlated with the acquisition parameters at which each CL map was taken, such as beam current, beam mode, acceleration voltage, rastering pixel size and dwell time. Some parameters were found to have a bigger effect on the CL signal quality than others; these are discussed in depth below.
Figures 1(b)-(i) show the effect of the electron beam current, beam mode (either CW or PM), and dwell time on the fitted peaks for each map (in the figure, an asterisk (*) marks scans with long dwell times of 502 ms). The perovskite and PbI2 emission peaks are analysed separately, as the perovskite peak appears to be more sensitive to electron irradiation, while the PbI2 peak remains largely unaffected upon continuous electron beam exposure of up to 90 s (see figure S2 in the SI for the evolution of each peak over beam exposure). Figures 1(c) and (e) show that the perovskite peak position x0 depends on the electron beam current (ranging between 14 pA and 10 nA) and the beam mode (blue for CW and orange for PM). Larger spreads of the perovskite peak positions are observed under CW mode when other parameters such as pixel size or dwell time are changed. For example, as marked in figure 1(e), we observe a spread of x0 of 750±10 nm (∼50 meV) at 250 pA in CW, while at the similarly low-current condition of 115 pA in PM, a spread of x0 of only 750±3 nm (∼10 meV) is measured. At comparable currents we find that PM allows for a more robust CL detection of the perovskite peak. Figure 1(c) shows a small blue-shift of a few nanometers (∼10 meV) between the perovskite peak emission in CW and PM. Blue-shifts at higher currents may be explained by the formation of beam-induced defects and by the Burstein-Moss effect, in which large charge carrier populations can saturate the band edge and populate higher energy states of the conduction band [8,11,14,23]. At 115 pA and 6 keV, charge carrier concentrations in PM are estimated to reach values as large as ∼10¹⁸ e⁻-h⁺ pairs cm⁻³, assuming a cubic interaction volume of 200 nm depth (see the SI for the estimation). The pulsed nature of the beam would allow carriers to relax if the time delay between electron pulses is larger than the charge-carrier relaxation time, resulting in smaller blue-shifts than in CW mode. Figure 2(b), discussed later, agrees with this observation of blue-shifts, with spatially averaged perovskite peak emission at 746-749 nm (1.66 eV) in CW compared to 752 nm (1.65 eV) in PM. Figure 1(g) shows the perovskite peak intensity (I_CL) as a function of beam current and beam mode. For the perovskite peak, we find that PM acquisition can achieve CL intensities as large as those achieved in CW with twice the current (250 pA in CW versus 115 pA in PM). However, only low beam currents can be accessed in PM, which strongly limits the intensity of the CL signal.
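The carrier-density estimate quoted above can be reproduced with a short back-of-the-envelope calculation. The sketch below assumes a mean e⁻-h⁺ pair generation energy of about three times the band gap (a common rule of thumb for semiconductors) and the cubic (200 nm)³ interaction volume stated in the text; the SI derivation may differ in detail:

```python
E_CHARGE = 1.602e-19          # C
beam_current = 115e-12        # A (PM beam current)
rep_rate = 80.6e6             # Hz (pulse repetition rate)
beam_energy = 6e3             # eV (6 kV acceleration voltage)
band_gap = 1.65               # eV (perovskite emission near 752 nm)
gen_energy = 3 * band_gap     # eV per e-h pair (rule-of-thumb assumption)

electrons_per_pulse = beam_current / (rep_rate * E_CHARGE)   # ~9 electrons
pairs_per_pulse = electrons_per_pulse * beam_energy / gen_energy
volume_cm3 = (200e-7) ** 3    # cubic interaction volume, (200 nm)^3 in cm^3

density = pairs_per_pulse / volume_cm3
print(f"~{electrons_per_pulse:.0f} electrons/pulse -> {density:.1e} pairs/cm^3")
```

With these assumptions the result is ∼1e18 pairs cm⁻³, consistent with the order of magnitude given in the text.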
The analysis of the PbI2 peak shows different trends, in which most of the observations for the beam-sensitive perovskite peak do not apply. Figures 1(b) and (d) show a narrow spread of the peak position x0 under both CW and PM, in the range of 3 nm (10 meV), when other parameters such as pixel size or dwell time are changed. A red-shift of the PbI2 peak from 508-510 nm to 512-517 nm (2.43-2.44 eV to 2.40-2.42 eV) is observed when the largest beam current of 10 nA is used. This small red-shift may be explained by the pre-existing PbI2 crystallites in the film growing thicker under excessive electron irradiation, as PbI2 single crystals show thickness-dependent photoluminescence [22]. PbI2 degradation can also result in the creation of defects and peak shifts, consistent with these observations [24,25]. For currents of 1 nA and lower, such red-shifts are not observed. Hence, PbI2 is a more stable phase and not as susceptible to beam damage as the perovskite phase, as suggested by the evolution of each peak upon beam exposure (figure S2). Figure 1(f) shows the peak intensity increasing proportionally to the current used. The PbI2 CL emission acquisition does not benefit from using PM in the way the perovskite phase does.
Given the relative stability of the PbI2 peak, its intensity can be used as a reference against which to compare the perovskite peak intensity for each acquisition condition. In a pristine specimen, the ratio between the perovskite (Pvk) and PbI2 peak intensities (Pvk:PbI2 ratio) is expected to be larger than 1, as the perovskite peak dominates. The ratio can decrease due to electron beam irradiation, and this can be taken as a measure of beam damage. Figure 1(h) shows that the Pvk:PbI2 ratio is strongly related to dwell time and beam mode (and also to beam current, as shown in figure S9 in the SI). At dwell times of 52 ms the Pvk:PbI2 ratio is one order of magnitude larger for PM than for CW, which suggests that PM is significantly better at preserving the pristine perovskite phase. However, at extremely low-current conditions, especially in PM, longer dwell times are required to achieve signal-to-noise ratios (SNR) sufficient to discern the signal from the background at each pixel. Longer beam exposures appear to enhance the degradation of the perovskite, as seen at larger dwell times of 102 or 502 ms in figure 1(h). A compromise between beam current and dwell time is thus needed to minimize the changes to the perovskite peak.
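Once per-map fits are available, the damage metric itself is a one-line computation; a minimal sketch with hypothetical fitted intensities:

```python
import numpy as np

# Hypothetical fitted peak heights for a series of maps (arbitrary units).
i_pvk = np.array([320.0, 280.0, 150.0, 40.0])   # perovskite I_CL per map
i_pbi2 = np.array([60.0, 58.0, 61.0, 59.0])     # PbI2 I_CL per map (stable)

pvk_pbi2_ratio = i_pvk / i_pbi2
# A ratio falling below ~1 indicates significant loss of perovskite emission.
for k, r in enumerate(pvk_pbi2_ratio):
    flag = "damaged" if r < 1 else "ok"
    print(f"map {k}: Pvk:PbI2 = {r:.2f} ({flag})")
```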
Another interesting feature is the intermediate degradation phase peak, which can be fitted with a broad Gaussian between 650 and 740 nm (1.68-1.90 eV). A ratio between the intermediate degradation and perovskite peak intensities can be calculated (intermediate:Pvk ratio), as the intermediate peak appears while the perovskite peak decays. The lower this ratio, the more pristine the perovskite. Figure 1(i) shows how the intermediate:Pvk ratio is affected as the beam current increases and as CW or PM is used. The intermediate degradation phase is more prominent when high beam currents above 1 nA are used in CW mode, as seen from the ratio increasing by one to two orders of magnitude compared with lower currents. Both for low CW currents of 62.5 and 250 pA and for PM, the degradation peak is low in intensity. Long dwell times in PM are correlated with the appearance of this degradation phase, marked with an asterisk in the figure. The dose, i.e. the combination of beam current and dwell time, thus governs the appearance of the intermediate degradation phase.
The appearance of emission features at higher energies than the perovskite phase is in agreement with other works studying electron beam damage [6,26,27]. While the nature of the intermediate degradation phase cannot be conclusively assigned from this CL analysis alone, it can be attributed to a series of factors. The low activation energy of halide-related defects enhances the vulnerability of these materials to beam-induced degradation [28,29], which results in a broad distribution of trap states that can be optically active and detectable if present in large densities. Moreover, loss of the more volatile iodine species due to electron beam irradiation could also result in the observed blue-shifts. Halide demixing across films has been reported after light soaking or device operation, showing facile halide redistribution between the surface and interface [30-32]. The formation of beam-induced small nanoscale crystallites could also result in a distribution of blue-shifted emission due to confinement effects [33]. Finally, the blue-shift could also be attributed to electron-beam-driven amorphization of the perovskite phase, in agreement with similar high-energy emission peaks observed upon pressure-induced amorphization [34].
Other acquisition parameters were found to have a smaller effect on the CL signal quality, such as the field of view, raster scan size and electron beam energy, as shown in figures S5-S7 in the SI. The effect of the first two is convolved with the dwell-time effect already discussed. The effect of acceleration voltage is consistent with previous studies [8,27,35-38], in which lower acceleration voltages probe the optical properties at the surface, while at higher beam energies the bulk is probed, which can emit differently from the surface. Photons generated deeper in the bulk can also be absorbed by the perovskite layer and not contribute to the collected CL intensity. In general, beam damage is more sensitive to increasing beam current than to acceleration voltage [26].
In short, PM has been found beneficial for the acquisition of CL on triple-cation double-halide hybrid perovskite films in comparison to CW mode. However, when PM is used, the beam current and dwell time need to be carefully adjusted in order to obtain sufficient SNR while maintaining the perovskite emission. We find that currents at the upper end of the PM range (∼115 pA) and dwell times of <50 ms, under the experimental conditions and samples reported here, give the best results. Despite the optimization of the acquisition, PM CL still exhibits some secondary features related to beam damage.
The potential of pulsed mode SEM CL on hybrid perovskite films
We use SEM CL to map the nanoscale heterogeneity of perovskite polycrystalline films. Figure 2 shows a series of CL maps of the fitted perovskite peak taken at different positions on the same perovskite film. The perovskite peak emission changes for spatially resolved CL maps as a function of acquisition conditions. We observe a gradual modification of the perovskite peak quality, measured in terms of the fitted Gaussian position x0 (figure 2(b)), peak height I_CL (figure 2(c)) and peak full-width at half-maximum (FWHM, figure 2(d)), as a function of beam current. In general, the CL maps acquired at higher currents show more heterogeneous distributions of the perovskite peak features. Figure 2(e) shows a lower intermediate:Pvk intensity ratio for lower beam currents, suggesting a more pristine perovskite emission due to a reduction in beam damage and more homogeneous charge carrier recombination pathways. The least significant beam damage is recorded under PM, at a mean intermediate:Pvk ratio of 0.18 (see figure S12 in the SI for the histograms of the pixel distribution of this ratio). In PM, longer pixel dwell times, twice as long as in CW mode (52.28 ms in PM versus 22.28 ms in CW), were used. These dwell times were short enough, within the threshold shown in figure 1(h), to cause no additional beam damage. CL maps acquired in PM at lower currents of 14 or 33 pA resulted in reduced signal quality if dwell times of 52 ms were used, which complicated spectral fitting on a per-pixel basis (figure S11 in the SI).
The switch from CW to a PM electron beam under similar low-current conditions (62.5 and 250 pA in CW and 115 pA in PM) reveals a further improvement in data acquisition in terms of peak consistency. Figure 2(d) shows a reduction in the FWHM magnitude and a more homogeneous distribution for PM than for CW mode. Similarly, figure 2(b) shows a more homogeneous peak position distribution for PM.
Finally, the use of the optimized PM acquisition conditions enables the study of TRCL decays of the different phases found in the films. Figure 3 shows the acquisition of spatially averaged TRCL decays on a hybrid perovskite polycrystalline film for the first time. Figure 3(a) shows the three peaks of interest selected for the TRCL signal, at 507, 663 and 747 nm (2.45, 1.87 and 1.66 eV) for the PbI2, intermediate degradation and perovskite phases, respectively. The PbI2 and intermediate phases are shorter lived than the perovskite phase, as shown in figure 3(b). The 1/e decay time for the perovskite peak is 0.7±0.1 ns. The lifetimes of the PbI2 and intermediate phase TRCL decays could not be resolved, as they are below the resolution limit of the detector (80 ps). Understanding the dynamics and nature of these phases is beyond the aims of this work; hence we only present the 1/e values and no fitting models.
Despite the optimization of the experimental conditions used to acquire the TRCL data, the perovskite peak degraded continuously, as shown in the inset of figure 3(b). The inset shows the time evolution of the photon counts arriving at the time-correlated single photon counting photodetector for each peak over continuous beam exposure in PM. For the perovskite peak, the spikes in the photon count correspond to each of the six different regions of the sample that were scanned during TRCL acquisition. The perovskite emission completely degraded within 50-100 s of continuous beam rastering (similar to the spectral time evolution shown in figure S2 in the SI). In order to achieve an SNR of two orders of magnitude, several different regions were rastered. For the PbI2 and the higher-energy degradation peaks, only a single region was scanned to achieve the desired SNR.
The PbI2 and the intermediate degradation phase exhibit extremely short-lived dynamics. These short TRCL decays may be attributed to the highly localized formation of these phases, in which charge carriers are produced in a confined volume at high densities, thus affecting lifetimes due to the nature of bimolecular recombination. For the perovskite peak, the longer-lived carrier dynamics may be attributed to the large grains in the film, which can disperse the charge carriers, formed locally by the electron beam, across the grain. However, these lifetimes are significantly shorter than time-resolved photoluminescence measurements of similar compositions, which are on the order of hundreds of nanoseconds [39]. These shorter lifetimes may indicate degradation of the perovskite phase through the creation of beam-induced defects, which are visible in the broad intermediate degradation peak and act as charge carrier quenching pathways. Moreover, Auger processes may play a role due to the higher carrier densities produced in CL compared to PL, of the order of ∼10¹⁸ e⁻-h⁺ pairs cm⁻³ [40], as has also been seen in less beam-sensitive materials [41]. Similar perovskite formulations containing mixed iodide and bromide are known to show higher Auger rates as the bromide fraction increases due to gradual changes in phase structure, which may here be caused by beam degradation [39].
In short, figures 1 and 2 have shown more efficient data acquisition of the perovskite emission when a PM electron beam is implemented. Given a CW and a PM electron beam at the same effective beam current, the PM beam produces short pulses of higher instantaneous electron current, while the CW beam produces a constant flux of lower electron current. The higher excitation density in PM can produce larger densities of charge carriers, which could saturate non-radiative recombination sites and lead to overall stronger emission. In CW mode, if currents are too low, the non-radiative recombination sites may never be saturated and a smaller fraction of recombination would be radiative. It is therefore the nature of a pulsed beam that could allow for more efficient CL acquisition (see schematic S13 in the SI). These mechanisms may not be applicable to materials with longer lifetimes, as charge carriers would remain in the excited state for longer and PM would give similar effects to the CW beam. In such cases, the higher efficiency in CL acquisition using PM would not be as pronounced. However, given that electron beam damage may be unavoidable for beam-sensitive materials such as hybrid perovskites, the TRCL lifetimes are likely to be shorter than those measured with optical excitation of the pristine structure, which further favours PM.
Outlook
We have shown the benefits of using a PM electron beam on hybrid perovskite films. It unlocks the use of SEM CL on beam-sensitive hybrid perovskite materials and enables the acquisition of hyperspectral maps with high spatial resolution. CL can thus be used to explore the heterogeneity of optical properties at the nanoscale, at smaller length scales than photoluminescence. TRCL, enabled by the use of a PM electron beam, can be useful to understand the carrier dynamics of these materials at high excitation densities, especially interesting for light emission devices.
CL on beam-sensitive materials is affected by beam damage and hence must be acquired under scrupulous management of the acquisition parameters. After optimization of the acquisition conditions, CL still exhibits features related to beam damage; unambiguously assigning the nature of these degradation features in the scanned regions of interest will be the subject of future work. In this work we have mainly analysed the effects of beam current, dwell time and beam mode, yet other parameters should be further investigated. For example, it was shown that temperatures as low as 80 K can hinder the formation of intermediate high-energy degradation peaks in mono-cation mono-halide perovskite compositions [8]. Modifying the pulse rate of PM electron beams may also further improve CL acquisition for beam-sensitive materials, as longer separations between the pulses may allow the beam-sensitive material to fully relax electronically and thermally. Finally, further work on sample stabilization may mitigate beam damage. For example, the addition of contact layers, or the characterization of devices instead of films, may help dissipate charges and prevent volatile species from leaving the sample [42]. Such approaches will be the subject of future work.
Conclusion
We have systematically studied the parameters affecting the acquisition of CL maps of hybrid perovskite (FA0.79MA0.16Cs0.05)Pb(I0.83Br0.17)3 films on glass, such as the effects of beam current, dwell time and beam mode. PM electron beams have been found to be useful for the study of triple-cation double-halide perovskite films, yielding more robust results than CW mode. Using PM, the CL spectra strongly resemble pristine perovskite emission, in which the perovskite peak is the strongest. Even under optimized conditions, some effect related to beam damage is persistently observed in the form of a broad intermediate peak at higher energies. TRCL of the polycrystalline film showed short-lived charge-carrier dynamics compared to photoluminescence, suggesting additional electron beam-specimen interactions to be further investigated. The optimization described in this work will help to unlock the use of CL hyperspectral mapping and TRCL on the more beam-sensitive hybrid perovskite compositions.
As SEM-CL systems with PM capabilities become prevalent and are equipped with more sensitive and faster detectors, we anticipate that CL will play a large role not only in resolving the complex heterogeneity of the materials in the family of hybrid perovskites, but also in understanding the properties and degradation of many other novel beam-sensitive semiconductors.
"Physics",
"Materials Science"
] |
A New Analogue of Echinomycin and a New Cyclic Dipeptide from a Marine-Derived Streptomyces sp. LS298
Quinomycin G (1), a new analogue of echinomycin, together with a new cyclic dipeptide, cyclo-(L-Pro-4-OH-L-Leu) (2), as well as three known antibiotic compounds, tirandamycin A (3), tirandamycin B (4) and staurosporine (5), were isolated from Streptomyces sp. LS298 obtained from the marine sponge Gelliodes carnosa. The planar structures and absolute configurations of compounds 1 and 2 were established by MS and NMR spectral data analysis and by Marfey's method. Furthermore, the differences in the NMR data of the keto-enol tautomers of tirandamycins are discussed for the first time. The antibacterial and anti-tumor activities of compound 1 were measured against 15 drug-sensitive/resistant strains and 12 tumor cell lines. Compound 1 exhibited moderate antibacterial activities against Staphylococcus epidermidis, S. aureus, Enterococcus faecium and E. faecalis, with minimum inhibitory concentration (MIC) values ranging from 16 to 64 μg/mL. Moreover, it displayed remarkable anti-tumor activities; the highest activity was observed against the Jurkat cell line (human T-cell leukemia), with an IC50 value of 0.414 μM.
Introduction
With the emergence of newer resistant forms of infectious diseases and multi-drug resistant (MDR) bacteria and tumors, it has become essential to develop novel and more effective antibiotics [1]. In recent years, numerous studies have discovered that marine-derived actinomycete strains, mainly Streptomyces species, have the ability to produce a wide variety of biologically active and structurally unique metabolites. Some of these compounds possess strong antibacterial and anti-tumor activities [2][3][4]. The immense diversity of marine actinomycetes, along with their underutilization, has attracted great attention from researchers to discover novel antibiotics [5][6][7][8].
The strain LS298 was obtained from the marine sponge Gelliodes carnosa collected in the South China Sea. Based on 16S rRNA sequence analysis (GenBank accession number FJ937945) [9] and its morphology, this strain was preliminarily identified as a Streptomyces sp. Our previous studies have shown that the secondary metabolites of this strain include echinomycin, cyclic dipeptides and esters [10]. Among these compounds, echinomycin, a bifunctional DNA intercalator, is the predominant biologically active constituent, being active against Gram-positive and Gram-negative bacteria and also showing good anti-tumor activity [11-14]. Our continued search for echinomycin analogues and other novel antibiotics in extracts from large-scale fermentation led to the isolation of two new compounds, quinomycin G (1) and cyclo-(L-Pro-4-OH-L-Leu) (2), as well as three known compounds, tirandamycin A (3), tirandamycin B (4) and staurosporine (5) (Figure 1). Structurally, quinomycin G (1) possesses a terminal double bond in one of the Ser residues. Cyclo-(L-Pro-4-OH-L-Leu) (2) is a new cyclic dipeptide. Tirandamycin A (3) is the 1-enol-4′-keto form, while tirandamycin B (4) is the 1-keto-4′-enol form; this is the first time this form of tirandamycin B has been explicitly revealed. In addition, the antibacterial and anti-tumor activities of compound 1 were evaluated against 15 drug-resistant/sensitive strains and 12 tumor cell lines.
Structure Elucidation of Compounds 1-5
Quinomycin G (1) was obtained as an amorphous yellow powder. Its molecular formula of C51H64N12O12S2 was determined by HRESIMS (m/z 1101.4288 [M + H]+, calcd for C51H65N12O12S2, 1101.4286), requiring 26 degrees of unsaturation. The chemical structure of 1 was inferred to be an echinomycin analogue from the close similarity of its molecular formula and ultraviolet spectral properties (λmax (log ε) 245.2 nm (2.6) and 325.8 nm (1.9)) to those of echinomycin [10]. The 1H NMR spectrum of 1 (Table 1; Supplementary Materials Figure S8) and the HSQC spectrum of compound 1 indicated that compound 1 comprised two quinoxalines and eight amino acid moieties (two N-Me-Val, two Ala, two N-Me-Cys, one Ser and one dehydroxy-Ser). In the HMBC spectrum, correlations from the methylene protons (δH 6.90 (1H, brs) and 6.11 (1H, brs)) to the carbonyl carbon (δC 163.2) confirmed that the double bond originated from the Ser. On the basis of the above information, all proton and carbon resonances were assigned and the planar structure of compound 1 was established. Because the planar differences between the structures of compound 1 and echinomycin cause changes in spatial conformation, the NMR spectral data, especially the 1H NMR data, of compound 1 differed from those of echinomycin. The double bond of the dehydroxy-Ser may allow the quinoxaline, amide, alkene and carbonyl groups to form a large conjugated plane (Supplementary Materials Figure S9). The CH3 of the Ala′ is positioned in the shielding area, so its 1H NMR signal is shifted upfield to δH 0.19. Marfey's method was employed to assign the absolute configurations of the amino acid residues resulting from acid hydrolysis of 1 [15,16]. The 1-fluoro-2,4-dinitrophenyl-5-L-alanine amide (FDAA) derivatives of the acid hydrolysate of 1 and of authentic D- and L-amino acids were subjected to HPLC analysis. The absolute configurations of all amino acid residues in 1 except N-Me-Cys were established by comparing their HPLC retention times with those of the corresponding authentic D- and L-amino acid standards (Table 2). Thus, as shown in Figure 1, the absolute stereochemistry of this novel echinomycin analogue was assigned, and it was given the trivial name quinomycin G.
Subsequently, this inspired us to study the structural distinction between them. A literature survey indicated that the substituent groups on the bicyclic ketal moiety have little influence on the NMR spectral data of the long conjugated system [17-22]. Therefore, we proposed that the distinct differences in NMR spectral data were caused by the positions of the enolic hydroxy and carbonyl groups. Compared with C-1 at δC 173.5 in tirandamycin A (3), C-1 in tirandamycin B (4) moved downfield to δC 181.0, implying that 4 should be in the 1-keto-4′-enol form. Keto-enol tautomerism is widespread in the structures of natural products, and the trends in the NMR data of the two tautomers have been studied [23], which also supported that tirandamycin B (4) is the 1-keto-4′-enol form. This is the first time the 1-keto-4′-enol form of tirandamycin B has been explicitly revealed. Because the structures of the 1-keto-4′-enol forms of tirandamycins were previously unclear, the assignments of the NMR data of these compounds were not correct [21,22]. Herein, we summarize the trends in the NMR data of the keto-enol tautomers of tirandamycins in order to raise awareness of the structural and NMR data differences between the two forms. In the 13C NMR spectrum, when the structure is in the 1-enol-4′-keto form, as in tirandamycin A, the carbon signals occur at approximately δC 173.5 (C-1), 116.2 (C-2), 147.9 (C-3) and 143.7 (C-5); however, the carbons of the 1-keto-4′-enol form, as in tirandamycin B, resonate at approximately δC 181.0 (C-1), 124.8 (C-2), 143.2 (C-3) and 137.9 (C-5). More importantly, the three olefinic protons show obvious differences between the two tautomers in the 1H NMR spectrum: δH 7.05 (H-2), 7.47 (H-3) and 6.19 (H-5) in the 1-enol-4′-keto form change to δH 7.55 (H-2), 7.14 (H-3) and 5.81 (H-5) in the 1-keto-4′-enol form. The chemical shift of H-2 at δH 7.55 is anomalously increased, which may be due to the anisotropic effect of the carbonyl double bond (1-C=O). According to the above results and a literature survey [17-22,24], we also propose a brief rule: if the 1H NMR shift of H-5 is greater than δH 6.00, the tirandamycin structure is in the 1-enol-4′-keto form; otherwise, it is in the other form. Tirandamycins A (3) and B (4) were employed to study the keto-enol tautomerism of tirandamycins, with test temperatures set at 40, 60 and 80 °C. With increasing temperature, tirandamycin A (3) remained in the 1-enol-4′-keto form (Supplementary Materials Figure S21), whereas tirandamycin B (4) gradually transformed into the 1-enol-4′-keto form (Supplementary Materials Figure S26). These results suggest that in DMSO-d6 solution, tirandamycin A (3) is stable in the 1-enol-4′-keto form, while tirandamycin B (4) is more stable in the 1-keto-4′-enol form than in the other form. The reason may lie in the structure itself or in external factors, which need to be further investigated.
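The H-5 rule proposed above reduces to a single threshold check; a trivial sketch (the function name and docstring wording are illustrative, not from the paper):

```python
def tirandamycin_form(delta_h5: float) -> str:
    """Classify a tirandamycin keto-enol tautomer from the 1H shift of H-5.

    Rule proposed in the text: delta(H-5) > 6.00 ppm indicates the
    1-enol-4'-keto form; otherwise the 1-keto-4'-enol form.
    """
    return "1-enol-4'-keto" if delta_h5 > 6.00 else "1-keto-4'-enol"

print(tirandamycin_form(6.19))  # tirandamycin A -> 1-enol-4'-keto
print(tirandamycin_form(5.81))  # tirandamycin B -> 1-keto-4'-enol
```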
The known antibiotic staurosporine (5) was characterized by comparison of its spectral data (MS, 1H and 13C NMR) with those found in the literature [25].
Biological Assays
The novel echinomycin analogue, compound 1, was assayed for antibacterial activity against the panel of 15 drug-sensitive/resistant bacterial strains (Table 6).
Bacterial Material and Fermentation
The producing strain LS298 was isolated from a sponge Gelliodes carnosa collected from Lingshui Bay, Hainan Province of China near Xincun Harbor (18°24′5.49″ N, 109°59′37.76″ E), in August 2007 [9]. It was identified as Streptomyces sp. on the basis of the morphology and 16S rRNA gene sequence analysis by comparison with other sequences in the GenBank database. The DNA sequence was deposited in GenBank (Accession No. FJ937945). The strain LS298 was first cultivated on Gause I agar plates (Gause I: starch 20 g; KNO3 1 g; NaCl 0.5 g; K2HPO3 0.5 g; MgSO4 0.01 g; Natural seawater 1 L; pH 7.0-7.2) at 28 °C for three days. Then, the mycelia were inoculated into 500 mL Erlenmeyer flasks, each containing 100 mL of liquid A1 medium (A1: starch 10 g; Yeast extract 4 g; Peptone 2 g; Natural seawater 1 L; pH 7.0-8.0). The flasks were incubated at 28 °C on a rotary shaker (200 rpm) for three days. Seed culture (10 mL) was transferred into three hundred 500 mL Erlenmeyer flasks (each Erlenmeyer flask contained 100 mL A1 medium) and incubated at 28 °C on a rotary shaker (200 rpm) for nine days.
Hydrolysis of Compounds 1-2 and HPLC Analysis by Marfey's Method
Compounds 1 (1.0 mg) and 2 (1.4 mg) were dissolved in 6 N HCl (1 mL) and heated at 110 °C for 18 h. After cooling to room temperature, the hydrolysates were dried under reduced pressure and resuspended in 100 μL of H2O. They were then treated with 1 M NaHCO3 (25 μL) and reacted with 100 μL of 1% (w/v) FDAA in acetone at 40 °C for 1.5 h. After cooling to room temperature, 1 M HCl (25 μL) was added to neutralize the mixture and terminate the reaction. MeOH was then added to the quenched reaction to give a total volume of 500 μL; 10 μL of each hydrolysate derivatization reaction was used for HPLC analysis on an Agilent C18 column (150 × 4.6 mm, 5 μm) with a solvent gradient from 15% to 45% solvent B (solvent A: CH3COOH/H2O, 0.05/99.95; solvent B: CH3CN) over 30 min, UV detection at 340 nm and a flow rate of 1 mL/min. Similarly, 10 μL of the standard amino acids in H2O (4 μM) were added to 1 M NaHCO3 (20 μL), and each mixture was treated with 1% (w/v) FDAA (50 μL) for 1.5 h at 40 °C. The derivatization reactions were terminated with 1 M HCl (20 μL) and diluted to a total volume of 500 μL with MeOH. Of these standard amino acid derivatization reactions, 10 μL was subjected to HPLC analysis and used as standards in the elucidation of the structures of 1 and 2.
Biological Assays
Antibacterial and anti-tumor assays were performed with compounds of purity >90% by HPLC. The 15 test bacterial strains, which included vancomycin-susceptible and vancomycin-resistant enterococci (VSE and VRE, e.g., strain 09-9 (VRE)), comprised strains from the ATCC collection and clinical isolates. MIC values of compound 1 against the 15 bacterial strains were measured using the agar dilution method described by the Clinical and Laboratory Standards Institute [26]. Briefly, the test medium was Mueller-Hinton broth, and the inoculum was 10,000 colony-forming units (CFU)/spot. Compound 1 was incorporated into the agar medium, with each plate containing a different concentration of the compound. Culture plates were incubated at 35 °C for 18 h, and MICs were then recorded. The positive controls were levofloxacin and echinomycin. The final concentrations of compounds ranged from 0.03 to 128 μg/mL. The MIC was defined as the lowest concentration that prevented visible growth of the bacteria [27].
The human colonic carcinoma (HCT-116), human hepatoma (HepG2), human gastric cancer (BGC-823), human non-small cell lung cancer (NCI-H1650), human ovarian cancer (A2780), human pancreatic cancer (SW1990, Mia-PaCa-2), human glioblastoma multiforme (U87 MG), human neuroblastoma (SK-N-SH) and human renal clear cell carcinoma (ACHN, 786-O) cell lines were maintained in DMEM medium; human T-cell leukemia (Jurkat) cells were maintained in RPMI 1640 medium. Both media were supplemented with 10% heat-inactivated fetal bovine serum, 100 units/mL penicillin and 100 μg/mL streptomycin, in a humidified 5% CO2/air atmosphere at 37 °C. MTT assay: briefly, cells in logarithmic growth were digested with 0.25% trypsin-EDTA and plated in 96-well plates at 800-2000 cells per 100 μL per well. After 24 h, compounds at final concentrations of 0.5 to 50 μg/mL were added, in triplicate for each concentration. The cells were incubated at 37 °C for a further 96 h, the medium was aspirated, and 100 μL of MTT at 0.5 mg/mL in medium was added. After 4 h of incubation, the medium was aspirated and 200 μL DMSO was added to solubilize the formazan crystals. Absorbance of the converted dye was measured at a wavelength of 570 nm with background subtraction at 650 nm. Dose-response curves were fitted with SigmaPlot and IC50 values were determined.
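The dose-response fitting was done in SigmaPlot; an equivalent IC50 extraction can be sketched in Python with a standard four-parameter logistic model (all data values below are hypothetical placeholders, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1 + (c / ic50) ** hill)

# Hypothetical viability data (fraction of untreated control) vs concentration (uM).
conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0])
viability = np.array([0.98, 0.95, 0.88, 0.55, 0.35, 0.10, 0.05])

popt, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 1.0, 0.4, 1.0])
print(f"IC50 = {popt[2]:.3f} uM (Hill slope {popt[3]:.2f})")
```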
Conclusions
In summary, a novel echinomycin analogue, quinomycin G (1), and a new cyclic dipeptide, cyclo-(L-Pro-4-OH-L-Leu) (2), along with three known antibiotics, tirandamycin A (3), tirandamycin B (4) and staurosporine (5), were isolated and characterized from the marine Streptomyces sp. LS298. To our knowledge, this is the first time these three types of antibiotics have been obtained from a single strain in the same fermentation batch, although such antibiotics have previously been isolated from different strains of the genus Streptomyces [18,32]. Moreover, the 1-keto-4′-enol form of tirandamycin B is reported here for the first time, and the trends in the NMR data of the keto-enol tautomers of tirandamycins are discussed.
Compound 1 exhibited moderate antibacterial and remarkable anti-tumor activities; however, its activities were lower than those of echinomycin, which suggests that the intact bicyclic peptide of these compounds is required for full activity. Echinomycin has been studied for many years, and the mechanism of its antibacterial and anti-tumor activities is considered to be DNA bis-intercalation. Given the similar structures, we propose that compound 1 acts by a similar mechanism against bacteria and tumor cells. Efforts are underway to discover novel and potent echinomycin analogues in future work through combinations of genome mining and heterologous expression approaches.
"Chemistry",
"Biology"
] |
Room-Temperature Performance of Poly(Ethylene Ether Carbonate)-Based Solid Polymer Electrolytes for All-Solid-State Lithium Batteries
Amorphous poly(ethylene ether carbonate) (PEEC), a copolymer of ethylene oxide and ethylene carbonate, was synthesized by ring-opening polymerization of ethylene carbonate. This route overcame the common issue of the low conductivity of poly(ethylene oxide) (PEO)-based solid polymer electrolytes at low temperatures, so that the solid polymer electrolyte could be successfully employed at room temperature. Introducing ethylene carbonate units into PEEC improved the ionic conductivity, electrochemical stability and lithium transference number compared with PEO. A cross-linked solid polymer electrolyte was synthesized by a photo cross-linking reaction using PEEC and tetraethyleneglycol diacrylate as a cross-linking agent, in the form of a flexible thin film. The solid-state Li/LiNi0.6Co0.2Mn0.2O2 cell assembled with the solid polymer electrolyte based on cross-linked PEEC delivered a high initial discharge capacity of 141.4 mAh g−1 and exhibited good capacity retention at room temperature. These results demonstrate the feasibility of using this solid polymer electrolyte in all-solid-state lithium batteries that can operate at ambient temperatures.
effectively dissolving lithium salts [28-34]. Sun et al. reported poly(trimethylene carbonate)-based polymer electrolytes; however, their ionic conductivities were lower than 10⁻⁸ S cm⁻¹ at room temperature, and thus cells assembled with these polymer electrolytes could only be operated at a very low current rate (1/55 C) [28,29]. Tominaga's group reported solid polymer electrolytes based on commercially available poly(ethylene carbonate) (PEC), which showed high ionic conductivity and a favorable lithium transference number at room temperature [30-33]. However, they could not be applied to rechargeable lithium batteries without a supporting membrane due to their poor dimensional stability [32]. Organic-inorganic hybrid solid electrolytes based on poly(ethylene oxide-co-ethylene carbonate) and octa-aminopropyl polyhedral oligomeric silsesquioxane have been prepared and applied to solid-state lithium batteries [34]. However, the solid-state lithium batteries assembled with a V2O5 cathode material could only be operated at high temperatures (~60 °C).
In this study, we synthesized poly(ethylene ether carbonate) (PEEC) via ring-opening polymerization of ethylene carbonate. This material showed an amorphous structure with a low glass transition temperature. Solid polymer electrolytes were then prepared from PEEC and a lithium salt by a solution casting method, and their electrochemical properties were investigated. In order to improve the mechanical strength of the polymer electrolyte, a three-dimensional cross-linked polymer electrolyte was synthesized by a photo cross-linking reaction using tetraethyleneglycol diacrylate (TEGDA) as a cross-linking agent. The cross-linked solid polymer electrolyte was applied to all-solid-state lithium cells composed of a lithium anode and a layered LiNi0.6Co0.2Mn0.2O2 cathode, and their electrochemical performance was evaluated at ambient temperatures.
Results and Discussion
The chemical structure of PEEC was characterized by analyzing its 1H and 13C NMR spectra, and the peak assignments were performed using two-dimensional (2D) NMR spectroscopy. Figure 1 shows the 1H and 13C NMR spectra of PEEC with the peak assignments. In the 1H NMR spectrum of PEEC, the main peaks at 4.29 and 3.73 ppm were assigned to the protons adjacent to the carbonate unit and the ether oxygen, respectively [34]. This confirms the presence of both ethylene carbonate and ethylene oxide units in the synthesized polymer. The molar ratio between the ethylene carbonate and ethylene oxide units in PEEC could be calculated from the integration ratio of the corresponding proton peaks (ethylene carbonate: 1,4; ethylene oxide: 2,5) in Fig. 1b. As a result, the molar ratio of the ethylene carbonate to ethylene oxide units was 49.5:50.5. In the 13C NMR spectrum of Fig. 1c, the two strong carbon peaks observed at 68.0 and 67.1 ppm could be assigned to the carbons adjacent to the ether oxygen and the carbonate group, respectively. Correlation spectroscopy (COSY) and heteronuclear single-quantum correlation spectroscopy (HSQC) were used to identify the detailed structure of PEEC; the resulting spectra are shown in Figs 2 and S1, respectively. In the COSY spectrum of PEEC (Fig. 2), the cross peaks 2/1, 5/6, 7/4 and 4/6 are clearly visible. The cross peak 2/1, which has the strongest intensity, confirms that the ethylene oxide and ethylene carbonate units are directly connected through a vicinal coupling. The molar compositions and average molecular weights of PEECs obtained under different reaction conditions are summarized in Table 1. Based on the data in this table, the PEECs obtained under different reaction conditions showed almost the same molar ratio of ethylene oxide to ethylene carbonate. GPC results show that the average molecular weight of PEEC decreases as the amount of catalyst increases and the reaction time decreases.
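The molar ratio quoted above follows directly from the integrals of the two proton signals, since the peaks compared contain the same number of protons per repeat unit. A minimal sketch, with illustrative integral values chosen to reproduce the reported 49.5:50.5 result:

```python
# Integrated 1H NMR areas for the two environments (arbitrary units).
# The peaks compared (carbonate-adjacent at 4.29 ppm; ether-adjacent at
# 3.73 ppm) represent equal proton counts per repeat unit, so the area
# ratio equals the molar ratio directly, as the text's analysis implies.
area_ec = 0.495   # protons adjacent to the carbonate unit
area_eo = 0.505   # protons adjacent to the ether oxygen

total = area_ec + area_eo
print(f"EC : EO = {100 * area_ec / total:.1f} : {100 * area_eo / total:.1f}")
```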
DSC analysis was performed to investigate the thermal behavior of PEEC and the PEEC-based polymer electrolytes, and the results are shown in Fig. 3. The salt concentration in the polymer electrolyte is expressed as a molar ratio of ([EO]+[EC])/[LiTFSI], as given in Table 2. In the DSC thermogram of PEEC without lithium salt, no melting transition peak was observed, indicating that PEEC is an amorphous polymer. The glass transition temperature (Tg) of PEEC was measured to be −34 °C, which means the synthesized PEEC is a flexible rubbery polymer with high segmental motion at ambient temperatures. The Tg value of PEEC lies between those of PEO (−64 °C) and PEC (9 °C). On adding lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) salt to PEEC at a molar ratio ([EO]+[EC])/[LiTFSI] of 16, the Tg value slightly increased. This result is due to a physical cross-linking effect arising from the strong interaction between ether oxygen atoms and lithium ions [35]. However, the Tg values of the PEEC-based polymer electrolytes decreased when the lithium salt concentration was further increased (from PEEC-16 to PEEC-1). This behavior can be explained by the increased conformational mobility of the polymer backbone at higher salt concentrations, as previously reported [30]. Moreover, plasticization of the host polymer by the TFSI anion also contributes to the decrease in the Tg values of the polymer electrolytes. Thermogravimetric analysis (TGA) (Fig. S2) confirmed that the PEEC-based polymer electrolyte is thermally stable up to about 180 °C.
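For practical formulation, the ([EO]+[EC])/[LiTFSI] ratio can be converted into a salt mass per gram of polymer. The sketch below assumes repeat-unit masses of 44.05 g/mol (EO, C2H4O) and 88.06 g/mol (EC unit, C3H4O3), the ~50:50 composition from the NMR analysis, and that the sample labels PEEC-16 to PEEC-1 denote this molar ratio; it is an illustrative calculation, not a protocol from the paper:

```python
M_EO, M_EC, M_LITFSI = 44.05, 88.06, 287.09   # g/mol
x_ec = 0.495                                   # EC fraction in PEEC (from NMR)

def litfsi_per_gram_polymer(ratio):
    """Grams of LiTFSI per gram of PEEC for a ([EO]+[EC])/[LiTFSI] ratio."""
    m_avg = (1 - x_ec) * M_EO + x_ec * M_EC    # mean repeat-unit mass
    moles_units = 1.0 / m_avg                  # repeat units in 1 g of polymer
    return moles_units / ratio * M_LITFSI

for ratio in (16, 8, 4, 1):
    print(f"ratio {ratio:>2}: {litfsi_per_gram_polymer(ratio):.2f} g LiTFSI/g PEEC")
```

The calculation makes clear why PEEC-1 is so mechanically soft: at a ratio of 1 the electrolyte contains several grams of salt per gram of polymer.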
To investigate the chain conformation of PEEC containing lithium salt in the polymer electrolyte, FT-IR analysis was performed. The band observed at 2960 cm −1 for neat PEEC in Fig. 4a can be assigned to the CH 2 stretching vibration of the gauche conformation of the C-C bond in the PEEC main chain. As the salt concentration increases, a new band appears at 2950 cm −1 , which corresponds to the CH 2 stretching vibration of normal alkanes. The band at 1740 cm −1 for neat PEEC in Fig. 4b can be identified as the stretching vibration of the carbonyl group on the PEEC main chain. A band associated with carbonyl groups interacting with lithium ions appears below 1720 cm −1 as the salt concentration is increased. These results indicate that the local rotational motion of the PEEC chain is enhanced by the addition of lithium salt, as schematically illustrated in Fig. 4c. Therefore, the T g values of the PEEC-based polymer electrolytes decreased with increasing salt concentration, and the dissociated ions could migrate faster as a result of the improved segmental motion at high salt concentrations. Enhanced segmental motion with increasing salt concentration in poly(ethylene carbonate)-based polymer electrolytes has also been reported by Tominaga's group [30][31][32][33]36 . The increase in the intensity of the band at 1720 cm −1 with increasing salt concentration suggests that PEEC can dissolve a large amount of lithium salt.
The ionic conductivity of the polymer electrolyte was determined by ac impedance measurements using cells with blocking electrodes. Figure 5a shows the temperature dependence of the ionic conductivities of the PEEC-based polymer electrolytes with different salt concentrations. The ionic conductivities of the PEO-based polymer electrolytes (MW of PEO: 6,000 and 200,000) are also shown for comparison. As expected, the PEO-based solid polymer electrolytes exhibited low ionic conductivities at ambient temperatures. Note that using PEO with low molecular weight (MW: 6,000) slightly increases the ionic conductivity, owing to the higher ionic mobility afforded by enhanced segmental motion. However, it was difficult to prepare a free-standing film when using low-molecular-weight PEO. Thus, the PEO-based polymer electrolyte prepared with only high-molecular-weight PEO was used as the control sample for further studies. Below the melting transition temperature of PEO, the activation energies for ionic conduction in the PEO-based polymer electrolytes are relatively high because of the high degree of crystallinity. On the other hand, the PEEC-based polymer electrolytes showed higher ionic conductivities than the PEO-based polymer electrolytes at room temperature, and the ionic conductivity continuously increased with increasing salt concentration, as depicted in Fig. S3. The higher ionic conductivities of the PEEC-based polymer electrolytes at ambient temperatures are believed to arise from the amorphous character of PEEC noted in the DSC results, since high ionic conductivity is associated with the amorphous phase of the polymer 37 . The higher ionic conductivity can also be attributed to the more favorable dissociation of lithium salt in PEEC, because the ethylene carbonate moiety has a higher dielectric constant for dissolving salt than the ethylene oxide moiety 25,26 . As discussed in the DSC and FT-IR results, the addition of lithium salt into PEEC improves the segmental motion of the polymer chain and decreases the glass transition temperature of the polymer electrolytes (Fig. S3). Thus, the increase in ionic conductivity with increasing salt concentration can be ascribed to increases in both the ionic mobility and the number of charge carriers. The temperature dependence of the ionic conductivities of the PEEC-based polymer electrolytes exhibited Vogel-Tamman-Fulcher (VTF) behavior throughout the temperature range investigated in this study, as has been reported for other amorphous polymer electrolytes 37,38 . This result suggests that ionic conduction in PEEC mainly depends on the segmental motion of the polymer chain. Although the PEEC-based polymer electrolyte with high salt concentration (PEEC-1) exhibited high ionic conductivity at room temperature, it was difficult to handle because of its poor mechanical stability; thus, it could not be directly applied to the solid-state lithium cell. To improve the dimensional stability of the polymer electrolyte, a three-dimensional cross-linked polymer electrolyte was synthesized via a photo cross-linking reaction using PEEC-1 and TEGDA as a cross-linking agent. The resulting solid polymer electrolyte was a freestanding, flexible, and rubbery thin film, as depicted in Fig. S4. The thickness of the cross-linked solid polymer electrolyte film ranged from 60 to 100 μm.
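Since the conductivity data are described as following VTF behavior, a short sketch of how such a fit might be performed is given below, using the standard VTF form sigma(T) = (A/sqrt(T))·exp(−B/(T − T0)). The data points and initial guesses are hypothetical placeholders, not values digitized from Fig. 5a.

```python
# Sketch of a VTF fit, sigma(T) = (A / sqrt(T)) * exp(-B / (T - T0)).
# The data points and initial guesses are hypothetical placeholders,
# not values digitized from Fig. 5a.
import numpy as np
from scipy.optimize import curve_fit

def vtf(T, A, B, T0):
    return (A / np.sqrt(T)) * np.exp(-B / (T - T0))

T = np.array([298.0, 313.0, 328.0, 343.0, 358.0])           # K (hypothetical)
sigma = np.array([1.6e-5, 4.0e-5, 9.0e-5, 1.8e-4, 3.2e-4])  # S/cm (hypothetical)

# Constrain T0 below the lowest data temperature to keep the model finite.
popt, _ = curve_fit(vtf, T, sigma, p0=(1.0, 1000.0, 200.0),
                    bounds=([0.0, 0.0, 0.0], [np.inf, np.inf, 290.0]))
A, B, T0 = popt
print(f"A = {A:.3g} S cm^-1 K^0.5, B = {B:.0f} K, T0 = {T0:.0f} K")
```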
As shown in Fig. 5a, the cross-linking of PEEC caused a slight decrease in ionic conductivity compared with the non-cross-linked polymer electrolyte (PEEC-1). The ionic conductivity of the cross-linked polymer electrolyte (XPEEC-1) is 1.6 × 10 −5 S cm −1 at room temperature, which is still two orders of magnitude higher than that of the PEO-based polymer electrolytes. The electrochemical stability of the various polymer electrolytes was evaluated by linear sweep voltammetry (LSV) at 55 °C, and the resulting LSV curves are shown in Fig. 5b. The LSV measurements were performed at 55 °C because the ionic conductivity of the PEO-based polymer electrolyte (PEO-16) was too low to measure the oxidative current at ambient temperature. As shown in the figure, the oxidative current started to increase around 4.5 V vs. Li/Li + in the PEO-based polymer electrolyte, which can be attributed to the oxidative decomposition of PEO. In contrast, the PEEC-based polymer electrolyte (PEEC-1) exhibited electrochemical stability above 4.9 V, indicating that introducing carbonate units into the polymer backbone improved the oxidative stability of the polymer electrolyte. Furthermore, this electrochemical stability is better than that of other solid polymer electrolytes such as poly(vinyl carbonate) 39 . Cross-linking by TEGDA hardly affected the oxidative stability of the cross-linked polymer electrolyte (XPEEC-1). Based on these results, the PEEC-based cross-linked polymer electrolyte exhibits higher ionic conductivity and electrochemical stability than the PEO-based solid polymer electrolyte, which makes it suitable for application in a solid-state Li/LiNi 0.6 Co 0.2 Mn 0.2 O 2 cell. The lithium transference number (t + ) in the PEEC-based cross-linked polymer electrolyte (XPEEC-1) was measured by a combination of ac impedance and dc polarization measurements 40,41 . An ac impedance measurement of the Li/XPEEC-1/Li cell (Fig. 6a) was used to determine the initial interfacial resistance. A small dc potential was then applied to the cell, and the current was monitored as a function of time until a steady-state current was established, as depicted in Fig. 6b. The steady-state interfacial resistance of the cell was again determined via an ac impedance measurement, as shown in Fig. 6a. From the data in Fig. 6a and b, the lithium transference number in XPEEC-1 was calculated to be 0.40, indicating that the mobility of the Li + ions is slightly lower than that of the anions. This is because the Li + ions are strongly coordinated by the polymer chains through ion-dipole interactions, while the anions are loosely associated with the polymer segments, allowing them to be displaced more readily under an electric field. Notably, the value of t + in XPEEC-1 was much higher than the value (0.16) measured in the PEO-based polymer electrolyte. As Tominaga et al. previously reported, the migration of Li + ions can be decoupled from the segmental dynamics in the ethylene carbonate unit, resulting in an increased lithium transference number 30 . These results suggest that the introduction of ethylene carbonate units into the polymer backbone can increase the lithium transference number of the polymer electrolyte. Figure 7a shows the charge and discharge curves of the Li/LiNi 0.6 Co 0.2 Mn 0.2 O 2 cell assembled with the cross-linked PEEC electrolyte at 25 °C.
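The transference-number calculation combines the dc currents with the ac-derived interfacial resistances in the usual Bruce-Vincent expression, t+ = Is(dV − I0·R0)/(I0·(dV − Is·Rs)), consistent with the combined ac/dc method of refs 40 and 41. A minimal sketch follows; every numerical value is an illustrative placeholder, not the measured data of Fig. 6.

```python
# Bruce-Vincent transference-number calculation (sketch).
# All numbers below are illustrative placeholders, not measured values.
dV = 10e-3   # applied dc potential, V
I0 = 25e-6   # initial current, A
Is = 12e-6   # steady-state current, A
R0 = 150.0   # initial interfacial resistance from ac impedance, ohm
Rs = 160.0   # steady-state interfacial resistance from ac impedance, ohm

t_plus = Is * (dV - I0 * R0) / (I0 * (dV - Is * Rs))
print(f"t+ = {t_plus:.2f}")
```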
The cell delivered an initial discharge capacity of 141.4 mAh g −1 , based on the LiNi 0.6 Co 0.2 Mn 0.2 O 2 material in the positive electrode. The coulombic efficiency was initially 94.8% and steadily increased with cycling, stabilizing above 99.5% after the initial few cycles. Figure 7b shows the discharge capacities and coulombic efficiencies of the cells assembled with XPEEC-1 (at 25 °C) and the PEO-based electrolyte (at 55 °C), respectively. It should be noted that the cell with the PEO-based polymer electrolyte could not operate at room temperature because of the high resistance of the polymer electrolyte. The discharge capacity of the cell with XPEEC-1 decreased from 141.4 mAh g −1 to 127.6 mAh g −1 at the 100th cycle, which corresponds to 90.2% of the initial discharge capacity. In contrast, the discharge capacity of the cell assembled with the PEO-based electrolyte decreased from an initial 136.2 mAh g −1 to 52.2 mAh g −1 at the 100th cycle, corresponding to 38.3% of the initial value. Notably, the cell assembled with the XPEEC-1 electrolyte exhibited higher discharge capacity and coulombic efficiency than the cell with the PEO-based electrolyte throughout cycling. The enhanced cycling performance of the cell with the cross-linked PEEC electrolyte can be attributed to the higher lithium transference number and better electrochemical stability. A high t + value in the polymer electrolyte results in high lithium-ion conductivity and less polarization of the cell potential 41 . With respect to cycling stability, a layer-structured cathode material such as LiNi 0.6 Co 0.2 Mn 0.2 O 2 can easily oxidize the oxyethylene group in the PEO-based polymer electrolyte. Accordingly, the cell with the PEO-based polymer electrolyte exhibited a large capacity decline and low coulombic efficiency, caused by the oxidative decomposition of PEO at high voltage during repeated charge-discharge cycles 42 . In the cell assembled with XPEEC-1, the carbonate unit in the PEEC backbone and the high salt concentration improve the oxidative stability of the solid polymer electrolyte. Super-concentrated electrolytes with enhanced electrochemical stability have also been reported by other research groups 43,44 . The highly adhesive properties of the PEEC-based polymer electrolyte also allowed it to maintain good interfacial contact with the electrodes during the charge and discharge processes. These results suggest that the cross-linked PEEC-based solid electrolyte enables all-solid-state lithium cells to operate at ambient temperatures. Figure 7c shows the discharge capacities of the Li/LiNi 0.6 Co 0.2 Mn 0.2 O 2 cell assembled with XPEEC-1 during experiments in which the C rate was increased from 0.1 to 1.0 C every five cycles. The discharge capacities gradually decreased as the C rate was increased, reflecting increased cell polarization at higher rates. Thus, both the ionic conductivity of the polymer electrolyte and the lithium-ion diffusivity in the positive electrode should be further improved to obtain good rate capability. More systematic studies related to the optimization of the solid polymer electrolyte and the proper design of electrodes suitable for solid-state cells are needed.
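The capacity-retention figures quoted above can be checked directly from the reported capacities; all numbers in the sketch below come from the text.

```python
# Worked check of the reported capacity-retention percentages.
q_initial_xpeec, q_100_xpeec = 141.4, 127.6  # mAh/g, from the text
q_initial_peo, q_100_peo = 136.2, 52.2       # mAh/g, from the text

print(f"XPEEC-1 retention: {100 * q_100_xpeec / q_initial_xpeec:.1f}%")  # 90.2%
print(f"PEO-16 retention:  {100 * q_100_peo / q_initial_peo:.1f}%")      # 38.3%
```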
Conclusion
Here we reported a novel solid polymer electrolyte based on PEEC as an alternative to the common PEO-based solid electrolytes for all-solid-state lithium batteries. Owing to the amorphous nature of the polymer matrix, the Li + ions have a high degree of mobility, as judged from the high ionic conductivity and transference number. As a result, the solid electrolyte can operate successfully at room temperature, which is a major issue in the battery application of solid polymer electrolytes. The presence of ethylene carbonate units within the polymer backbone facilitates the transport of Li + ions and widens the stable potential window. The solid polymer electrolyte was successfully used to fabricate a cell with a Li anode and a LiNi 0.6 Co 0.2 Mn 0.2 O 2 cathode, which exhibited good cycling performance at room temperature. The all-solid-state Li/LiNi 0.6 Co 0.2 Mn 0.2 O 2 cell delivered a high initial discharge capacity of 141.4 mAh g −1 and exhibited good capacity retention with high coulombic efficiency at 25 °C, thereby demonstrating room-temperature operation of this solid-state lithium cell using a solid polymer electrolyte.

567 mol) of EC and DBTDA were placed in an oil bath. After polymerization, the polymer was purified by filtration through a glass frit funnel to remove insoluble catalyst residues, and the filtrate was dissolved in chloroform. This was followed by precipitation in an excess of methanol. The methanol layer was decanted, and the oily residue was rinsed several times with methanol. The polymer, dissolved in dichloromethane, was eluted through a silica gel column. After solvent evaporation, the transparent, yellowish PEEC was obtained as the final polymer product.

Polymer electrolytes were prepared by dissolving PEEC and LiTFSI at the compositions given in Table 2. The solution was stirred well and cast on a Teflon plate using a doctor blade. The solvent was then allowed to slowly evaporate at room temperature. The resulting film was further dried in a vacuum oven at 60 °C for at least 24 h. To prepare the cross-linked polymer electrolyte, 10 wt.% of TEGDA and a catalytic amount of HMPP (0.2 wt.% of TEGDA) were added to the above solution; these served as the cross-linking agent and photo-initiator, respectively. The cast mixture on the Teflon plate was exposed to UV light (254 nm) for 10 min to induce the photo cross-linking reaction. The resulting cross-linked polymer electrolyte was dried under vacuum at 80 °C for 24 h. All of the preparation procedures were performed in a glove box filled with argon gas.
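As an aside on how compositions like those in Table 2 translate into weighing amounts, the sketch below estimates the LiTFSI mass per gram of PEEC for a target ([EO]+[EC])/[LiTFSI] ratio. The function name is hypothetical; the repeat-unit and salt molar masses are standard values, and the 49.5:50.5 EC:EO composition follows the NMR result above.

```python
# Hypothetical helper: grams of LiTFSI per gram of PEEC for a target
# ([EO]+[EC])/[LiTFSI] molar ratio (e.g., n = 16 for PEEC-16).
M_EO = 44.05       # g/mol, ethylene oxide repeat unit (C2H4O)
M_EC = 88.06       # g/mol, ethylene carbonate repeat unit (C3H4O3)
M_LITFSI = 287.09  # g/mol, LiTFSI

def litfsi_mass_per_gram_peec(n, x_ec=0.495):
    """Mass of LiTFSI (g) per 1 g of PEEC for ratio n = ([EO]+[EC])/[Li].

    x_ec is the EC mole fraction of repeat units (49.5:50.5 from NMR).
    """
    m_avg = x_ec * M_EC + (1.0 - x_ec) * M_EO  # average repeat-unit mass
    moles_units = 1.0 / m_avg                  # mol of (EO+EC) units per gram
    return (moles_units / n) * M_LITFSI

print(f"PEEC-16: {litfsi_mass_per_gram_peec(16):.3f} g LiTFSI per g PEEC")
```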
Materials
Characterizations. 1 H and 13 C NMR spectra were recorded in CDCl 3 with a tetramethylsilane (TMS) reference using an Avance-500 MHz NMR spectrometer (Bruker) at room temperature. The attenuated total reflection Fourier transform infrared (ATR-FTIR) spectra of PEEC and PEEC-based polymer electrolytes were obtained on a Nicolet iS50 Fourier transform infrared spectrometer (Thermo Scientific) in the wavenumber range of 400 to 4000 cm −1 . The average molecular weights and polydispersity indices (PDIs) of PEEC polymers were measured by gel permeation chromatography (GPC, Waters 1515) equipped with three columns in series (i.e., Styragel ® HR 1 THF, Styragel ® HR 4E THF and Styragel ® HR 5E THF). The system with a refractive index (RI) detector was calibrated using polystyrene standards. HPLC-grade THF was used as an eluent. Differential scanning calorimetry (DSC) measurements were carried out to examine the thermal transition behavior of the PEEC polymer and PEEC-based polymer electrolytes using a TA instrument (SDT Q600/DSC Q20) at a heating rate of 5 °C min −1 in the temperature range from −80 to 80 °C under a dry nitrogen atmosphere. TGA was performed using a TGA analyzer (SDT Q600, TA Instrument) in the temperature range from 30 to 500 °C at a heating rate of 10 °C min −1 .
Electrode preparation and cell assembly. The composite positive electrode was prepared by coating an NMP-based slurry containing LiNi 0.6 Co 0.2 Mn 0.2 O 2 , PEEC, LiTFSI, poly(vinylidene fluoride) (PVdF) and Super P carbon (70: 1.86: 8.14: 5: 15 by weight) onto Al foil. PEEC was used as a Li + ion conductor as well as a binder in the composite positive electrode. The electrode was dried under vacuum for 12 h at 110 °C and then roll pressed to enhance particulate contact and adhesion to the current collector. The active mass loading in the positive electrode was about 4.2 mg cm −2 . The negative lithium electrode consisted of a 200-μm-thick lithium foil (Honjo Metal Co., Ltd.) that was pressed onto a copper current collector. A solid-state Li/LiNi 0.6 Co 0.2 Mn 0.2 O 2 cell was assembled by sandwiching the solid polymer electrolyte (XPEEC-1 or PEO-16) between the negative lithium electrode and the positive LiNi 0.6 Co 0.2 Mn 0.2 O 2 electrode, as schematically presented in Fig. 8. After cell assembly, the cells were kept at 55 °C for 24 h in order to promote interfacial contact between the solid polymer electrolyte and the positive LiNi 0.6 Co 0.2 Mn 0.2 O 2 electrode. All of the cells were assembled in a glove box filled with argon gas.
Electrochemical measurements. For ionic conductivity measurements, solid polymer electrolytes were sandwiched between two disk-like stainless steel electrodes. AC impedance measurements were carried out using a Zahner Electrik IM6 impedance analyzer over the frequency range from 10 Hz to 100 kHz with an amplitude of 10 mV at different temperatures. Each sample was allowed to equilibrate for 1 h at the required temperature before measurement. Linear sweep voltammetry (LSV) experiments were performed to investigate the electrochemical stability of the polymer electrolytes on a platinum working electrode, with counter and reference electrodes of lithium metal, at a scanning rate of 1.0 mV s −1 and 55 °C. The lithium ion transference number (t + ) in the solid polymer electrolytes was measured in the Li/solid polymer electrolyte/Li cell by using a combination of ac impedance and dc polarization measurements at 55 °C 40,41 . For interfacial resistance measurements, solid polymer electrolyte was sandwiched between two lithium electrodes and sealed in coin cells. AC impedance measurements were performed in the frequency range from 100 mHz to 100 kHz at 55 °C. Charge and discharge cycling tests of the solid-state Li/LiNi 0.6 Co 0.2 Mn 0.2 O 2 cells were conducted at a constant current rate of 0.1 C in the voltage range of 3.0 to 4.3 V using battery test equipment (WBCS 3000, Wonatech) at 25 °C. | 5,427.8 | 2017-12-13T00:00:00.000 | [
"Materials Science"
] |
A SEMI-ANALYTIC EIGENVALUE EXTENSION TO THE DOPPLER SLAB ANALYTIC BENCHMARK
Advancement in multiphysics simulation has motivated interest in the availability of analytic and semi-analytic benchmark solutions. These solutions are sought because they can be used to assess the accuracy of complicated numerical schemes necessary to simulate coupled physics systems. While there exist analytic solutions for fixed-source problems, benchmark-quality eigenvalue solutions are of interest because eigenvalue problems more closely align with analyses undertaken with coupled solvers. This paper extends a fixed-source benchmark, the Doppler Slab benchmark, to the eigenvalue case. A novel solution for this benchmark is derived. Numerical implementation of the benchmark is demonstrated through verification of the numerically computed power reactivity coefficient.
INTRODUCTION
Interest in the characterization of the stability and the convergence characteristics of coupled-physics simulations for nuclear engineering applications has motivated the development of analytic, semi-analytic, and method of manufactured solution benchmarks, such as [1][2][3][4][5][6]. One such benchmark recently developed is the Doppler Slab benchmark [7], which couples neutron transport with thermal conduction physics via Doppler broadening. However, like most multiphysics analytic benchmarks, the radiation transport in the original Doppler Slab benchmark was driven by a fixed source. While the utility of these benchmarks is in their ability to highlight convergence behavior and provide verification of code implementations of established methods, eigenvalue benchmarks are desirable because they more accurately represent multiphysics problems of interest. In particular, certain quantities in eigenvalue calculations, such as reactivity coefficients, are sought that are fundamentally different than those computed in fixed-source problems.
There are few previous works that extend analytic benchmarks to the eigenvalue case. One example of such a benchmark is the eigenvalue Candlestick depletion benchmark by Kooreman and Griesheimer [3], which coupled neutron transport and depletion. Another example is Gonzales et al. [6], which derived analytic eigenvalue expressions for an infinite medium problem. While each of these benchmarks has demonstrated its applicability to the relevant physics, the utility of Kooreman and Griesheimer's work is twofold. First, their methodology for extending a fixed-source problem to an eigenvalue problem can be extended to other benchmark applications. Second, by posing an eigenvalue problem with the same assumptions presented in [3], an eigenvalue problem can be made to "look" like a fixed-source problem; if the solution to this fixed-source problem is known, it can be used to construct a solution to the eigenvalue problem.
Using this approach, this paper extends the Doppler Slab benchmark to an eigenvalue problem. The solution can best be characterized as semi-analytic rather than fully analytic because expressions for flux and eigenvalue are dependent upon a parameter that must be found via root-finding algorithms. However, since these techniques can give results to any desired precision, the eigenvalue extension of the Doppler Slab problem is a high-precision benchmark applicable to neutron transport codes coupled with thermal conduction feedback. Further, the utility of extending this benchmark to the eigenvalue case is demonstrated through the calculation of reactivity coefficients. A coupled code system with the in-house Monte Carlo code MC21 [8] and an in-house thermal conduction solver is used to numerically evaluate the benchmark through evaluation of the power reactivity coefficient via a brute-force method.
BENCHMARK SPECIFICATION
To extend the Doppler Slab benchmark to the eigenvalue case, we modify the physical conditions of the benchmark given in [7] by making assumptions similar to those made in [3]. Consider two-energy-group neutron transport in a one-dimensional homogeneous slab of purely absorbing fuel occupying x ∈ [0, L], with macroscopic cross section Σ in the thermal group, no interactions in the fast group, and thermal conductivity k. To the left of the slab is a perfect moderator material with constant temperature T0, such that all exiting fast neutrons on the left side of the slab re-enter the fuel slab as thermal neutrons. The right of the slab is a vacuum, and the interface between the fuel and vacuum is a perfect thermal insulator with dT/dx = 0. The geometry for the problem is given in Figure 1. While Kooreman and Griesheimer were interested in a coupled neutron transport-depletion problem, we analyze the case of neutron transport with thermal feedback. In the fixed-source version of the Doppler Slab benchmark, two Doppler feedback mechanisms were considered; only inverse-root feedback (in the language of the previous paper) is considered here. This feedback is given by Eq. (1), Σ(x) = Σ0 √(T0/T(x)), where Σ(x) is the macroscopic cross section as a function of the spatial coordinate x, Σ0 is the unperturbed (reference) macroscopic cross section, and T0 is a reference temperature (for simplicity, T0 is assumed to be the temperature at the left-hand boundary of the slab). If neutrons are restricted to travel only along the x-axis and scatter isotropically (what we call the "bisotropic" approximation), and we assume that the boundary between the fuel and the vacuum is perfectly insulated, then the governing equations for this problem are given by Eq. (2), in which ψf is the fast left-moving flux, ψt is the thermal right-moving flux, k is the thermal conductivity, Ef is the energy released in a fission event, and λ is the slab eigenvalue. To obtain a solution, we proceed as in [3] by observing that the bottom two equations in Eq. (2) are similar to those in the original Doppler Slab benchmark specification, with the exception of the boundary conditions. We require that the flux boundary condition at the left-hand side of the slab be constant and well characterized; the right-hand temperature boundary condition will be addressed later. To characterize the left-hand boundary condition, we first note that the fast flux can be found by direct integration, Eq. (4). We also note that the power P of the slab is given by Eq. (5). With Eqs. (4) and (5), the left-hand boundary condition becomes well specified, Eq. (6), and the problem to be solved is Eq. (7). Equation (7) is a fixed-source problem rather than an eigenvalue problem. Further, if the solution to Eq. (7) is known, then the solution to the eigenvalue problem can be constructed. The derivation of the semi-analytic solution for this problem parallels that of the corresponding fixed-source problem in [7] until boundary conditions are considered. We begin with the nondimensionalizing substitutions of Eq. (8). With these substitutions, as well as the use of Eq. (1) for Doppler feedback, the dimensionless equation for the neutronics is Eq. (9), and the dimensionless equation for the thermal physics is Eq. (10). Using arguments similar to those given in [7], Eqs. (9) and (10) can be combined to yield a nonlinear integro-differential equation, Eq. (11), for the dimensionless temperature, where C is an integration constant.
Applying a variable substitution to Eq. (11) results in a nonlinear initial value problem, Eq. (13), which has an implicit solution, Eq. (14). The integration constant C can be found by applying the Neumann boundary condition, which gives Eq. (17). With the solution to Eq. (17), C is known, and the benchmark solution can be constructed. It is convenient to say that the solution to Eq. (13) is V(0), where V is defined as the inverse of the function given in Eq. (18). By combining the expression for ψt and Eq. (3), the solution for ψf is given by Eq. (20). By utilizing the boundary condition ψt(0) = ψf(0), the slab eigenvalue is given by Eq. (22). With Eqs. (20)-(22), a solution to the benchmark is obtained.
One should note that this benchmark is semi-analytic rather than fully analytic. Indeed, the numerical implementation of the V-function represents a semi-analytic result. Furthermore, the V-function depends on a scaling parameter through the solution of Eq. (17), and that parameter in turn depends on the eigenvalue through Eq. (23). However, by combining Eqs. (17), (22), and (23), an equation for the quantity H is obtained, Eq. (24), that depends explicitly only on the problem inputs Σ0, P, L, k, and T0. Once H is obtained, λ can be determined via Eq. (22), and the scaling parameter can be determined via Eq. (23).
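Because H must be found by root finding, any bracketing solver reproduces the benchmark to essentially machine precision. The sketch below illustrates the step with SciPy's brentq; residual is a hypothetical stand-in for Eq. (24) rearranged into the form f(H) = 0, and the bracket endpoints are assumptions that must be chosen so the residual changes sign.

```python
# Root-finding step for the semi-analytic benchmark (sketch).
# residual(H) is a hypothetical stand-in for Eq. (24) written as f(H) = 0;
# substitute the actual expression when implementing the benchmark.
from scipy.optimize import brentq

def residual(H):
    return H**2 - 2.0  # placeholder only; not the benchmark equation

# The bracket [a, b] must straddle a sign change of the residual.
H = brentq(residual, 0.0, 10.0, xtol=1e-14)
print(f"H = {H:.12f}")  # converged to near machine precision
```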
It is also possible to generate expressions for derivatives of λ with respect to the problem input parameters.
While the input parameters for this problem are P, L, k, Σ0, and T0, the focus in this paper is the derivative with respect to power, Eq. (25). The expression in Eq. (25) is the power reactivity coefficient for the eigenvalue problem.
NUMERICAL IMPLEMENTATION
The solution presented in Section 2 is semi-analytic and is applicable for any physically meaningful input parameters. However, for the purposes of benchmarking numerical tools, it is useful to dictate "canonical" values for the benchmark. These values were chosen such that the desired thermal feedback of the benchmark is observed and such that common behavior between numerical implementations can be easily verified. The canonical values for the benchmark are given in Table 1. For the values given in Table 1, we observe about a 20% change in cross section over the length of the slab. To demonstrate the numerical implementation of the benchmark, a computational model was constructed using MC21 and an in-house thermal conduction solver. MC21 was used to compute the flux distribution, and the thermal conduction solver determined the temperature distribution. The MC21 model consisted of contiguous fuel and moderator slabs with dimensions 3 × 1 × 1 cm and 5 × 1 × 1 cm, respectively. The fuel material consisted of a fictitious purely-fissioning nuclide. For this exercise, the power reactivity coefficient is computed via a finite-difference approximation, Eq. (26), where λP is the eigenvalue evaluated at power P. For an accurate finite-difference approximation to be made, the two powers P1 and P2 should be sufficiently close together. Therefore, one should expect the eigenvalues λP1 and λP2 to be relatively close numerically as well. Because of these expectations, a large number of particle histories should be run in calculations that estimate the power reactivity coefficient, to ensure that the eigenvalue estimates are statistically distinct, that is, that their 95% confidence intervals (CIs) do not overlap.
A running strategy of 50 discard and 10,000 active batches of 2 million particle histories per batch was used for all MC21 calculations. When running simulations, it was observed that apparent eigenvalue convergence is reached within 5 iterations, so only 5 feedback iterations were run in the calculations for this exercise. For all calculations, Robbins-Monro relaxation was used, as in [7]. To estimate the power reactivity coefficient, P1 was 2 W and P2 was 2.01 W. The results for this exercise are given in Table 2. For all cases, the power at which the reactivity coefficient was calculated was P2. As seen in Table 2, the numerical eigenvalue estimates for each power agree with the benchmark eigenvalues to within less than 1 pcm and are within the 95% CI for each calculation. The numerical estimate of the power reactivity coefficient also agrees with the benchmark value to within its 95% CI. However, it is of note that even though 20 billion active particle histories were run at each feedback iteration and the uncertainties of λP1 and λP2 are 2 pcm, the relative 95% CI of the numerical power reactivity coefficient is greater than 50% of its absolute value. This large relative uncertainty on the estimated power reactivity coefficient highlights the fact that accurate calculations of reactivity coefficients via brute-force Monte Carlo methods are notoriously difficult due to the statistical uncertainty in the individual eigenvalue results. The reference power reactivity result for the Doppler slab eigenvalue benchmark can also be used to verify alternative reactivity estimation techniques, such as Monte Carlo perturbation methods. Unfortunately, a detailed examination and comparison of reactivity estimation methods is beyond the scope of this work.
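The finite-difference estimate and its statistical uncertainty can be reproduced in a few lines. The powers (2 and 2.01 W) and the roughly 2 pcm eigenvalue uncertainties follow the text; the eigenvalues themselves are hypothetical placeholders, and the 2 pcm figure is treated here as a one-standard-deviation uncertainty, which is an assumption.

```python
# Finite-difference power reactivity coefficient with propagated uncertainty.
import math

P1, P2 = 2.0, 2.01           # W, from the text
k1, k2 = 0.971234, 0.971219  # hypothetical eigenvalues at P1 and P2
sigma_k = 2e-5               # ~2 pcm per eigenvalue (assumed 1-sigma)

alpha = (k2 - k1) / (P2 - P1)                       # d(eigenvalue)/dP
sigma_alpha = math.sqrt(2.0) * sigma_k / (P2 - P1)  # independent errors in quadrature

print(f"alpha = {alpha * 1e5:.0f} +/- {1.96 * sigma_alpha * 1e5:.0f} pcm/W (95% CI)")
```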
CONCLUSIONS
The Doppler Slab benchmark initially described in [7] has been extended to an eigenvalue problem. The eigenvalue extension was solved semi-analytically with inverse-root Doppler broadening to yield an expression for the eigenvalue λ as a function of the boundary and initial conditions of the problem. Like the fixed-source version of the benchmark, the problem has simple geometry and is straightforward to evaluate numerically. Because of this, the eigenvalue extension to the Doppler Slab benchmark is an effective tool for benchmarking coupled thermal conduction/neutron transport solvers for quasistatic multiphysics calculations. In addition, an analytic expression for the power reactivity coefficient has been established for the Doppler slab eigenvalue benchmark problem. This analytic reactivity solution is valuable for verifying reactivity estimation techniques such as Monte Carlo perturbation methods. Numerical results produced with the MC21 Monte Carlo transport solver coupled with a simple thermal conduction solver show agreement with the analytic benchmark solution for eigenvalue and power reactivity, as expected. | 2,874.2 | 2021-01-01T00:00:00.000 | [
"Physics"
] |
A New Technique for Placement of Blocking Screws and its Mechanical Effect on Stability of Tibia Fractures with Distal Fragments after Insertion of Small‐Diameter Intramedullary Nails
Objective To design a novel blocking screw (BS) geometry and insertion method for treating distal tibia fractures with nailing, and to compare the mechanical properties of the novel and traditional screws. Methods Twenty‐one synthetic left tibiae were sectioned to obtain 21 distal segments measuring 55 mm. Intramedullary (IM) 9‐mm tibial nails were advanced to 6 mm from the ankle joint. Two transverse and one anterior–posterior (AP) locking screws were inserted. Both medial–lateral (ML) BSs were placed 10 mm from the topmost interlocking screw. A custom‐made jig assisted in placing the novel and traditional BSs. The time spent placing each BS was recorded. All samples were repaired with an IM nail: without BSs, with two traditional BSs, or with two novel BSs. An initial loading from −150 to +150 N was applied to specimens in the ML direction at 185 mm from the nail end, followed by cyclic loading over the same range for 10,000 cycles and failure‐to‐test loading of 350 N in the ML direction. The maximum displacement was measured at 80 mm from the nail end and recorded under initial loading. The damage to the nail from the two kinds of BSs was recorded. Results Compared with an average of 5.21 min to place a traditional BS, the time spent positioning a novel BS on the fracture model was 2.53 min. In the distal bone–implant constructs (BICs), the addition of traditional BSs decreased the maximum displacement of the BICs by 26.2%. The addition of the novel BSs decreased the displacement by 28.9%. All constructs survived 10,000 cycles without hardware deformation. The failure rate of the control group was significantly greater than that of the traditional group; the novel group was similar to the traditional group. The damage to the nail from the traditional BS was greater than that from the novel one. Conclusions The novel and traditional BSs are comparably effective for increasing the primary mechanical stability of distal metaphyseal fractures after nailing. However, compared with placement of a traditional BS, implanting a novel BS took less time and caused less damage to the nail. Additionally, the most obvious advantage of the novel BS design and insertion technique is that the pressure and distance between it and the IM nail can be controlled by rotating the screw. These advantages of the novel BS will be beneficial for clinical application.
Introduction
Distal tibia fractures are treated by various methods such as intramedullary (IM) nailing, plating, and external fixation, but an optimal treatment technique has not yet been established for clinical application 1 . IM nailing decreases the risk of soft-tissue complications compared with plate fixation 2,3 . However, the reported rate of postoperative malalignment after IM nailing reaches 14%-23% 4-6 . This complication often results from insufficient mechanical stability of the IM nail, owing to distal widening of the medullary canal and the low support strength of the small-diameter tibial IM nail [4][5][6] . To improve the effectiveness of IM nailing for treating distal tibial fracture, various techniques such as external fixation 7 , fibular fixation with a plate 8 to hold the alignment, and assistant nail fixation have been employed. Another common tool used to improve reduction and fixation is the blocking screw (BS) 9 .
Krettek et al. were the first to propose the use of a BS to assist the fixation of metaphyseal fractures for increased bone-nail stiffness 10 . By narrowing the medullary canal in the metaphyseal or flared segment of the bone, a BS improves the stability of the bone-nail construct. In a mechanical study, Krettek et al. 10 reported that the addition of BSs in a proximal tibial fracture model reduced the displacement of the bone-nail complex by 25% under ML loading. In distal tibial fractures, additional ML BSs increased the stiffness of the bone-nail constructs by 57%. Their results demonstrated that, to increase the stability of the bone-nail construct, the BS should be placed as close to the fracture site as possible. BSs are predominantly used for femur and tibia fractures at the metaphyseal-diaphyseal junction to assist fracture reduction and stabilize the bone-implant construct through the provision of a third fixation point [9][10][11][12][13][14] . A systematic review containing 13 studies with a total of 371 participants and 376 fractures showed that, compared with nailing alone, IM nailing with a BS has lower rates of nonunion and coronal malalignment when treating metaphyseal fractures 15 . Meanwhile, additional BSs can also decrease tibial callus formation 16 owing to increased bone-nail construct stability while treating the delayed union of proximal tibial shaft fractures via nailing 17 . These important advantages depend mainly on accurate BS positioning 10,[12][13][14][15][16][17][18] . Accurate placement also enables three-point fixation, which helps to overcome the mismatch between bone and nail at the diaphyseal-metaphyseal junction or in the metaphysis that is responsible for the associated axial displacement.
Studies have been conducted to investigate the proper placement of BSs, such as at an acute angle to the flared segment between the long axis of the displaced fracture fragments and aligned with the plane of the fracture 19 , on the opposite side of the thumbs 20 , and with pre-use of a Steinmann pin 21 . However, these studies only highlighted the region for BS positioning; the exact point of BS placement still relies on the surgeon's experience alongside x-ray fluoroscopy during the operation. In short, it is very difficult to implant a satisfactory BS using current techniques and methods. To enable the BS to provide accurate reduction and stability, it is often necessary to adjust its position repeatedly during the operation. However, multiple freehand adjustments prolong the operative time and increase the risk of nail damage, bleeding, loss of reduction, infection, and even new fractures 22 . The technique for adjusting BSs is therefore essential for achieving the maximum benefit of their use and requires an effective adjustment strategy. Hence, this important limitation of intraoperatively tuning traditional BSs requires improvement to simplify their clinical application.
The first objective of this study is to describe the geometric construct of a novel BS. The novel BS geometry is very simple: it modifies the traditional BS by cutting the screw tip into a flat end. The second objective of this study is to introduce the placement method of the novel BS to improve its clinical application. Unlike the traditional perpendicular method, the novel BS is placed in the concave plane, parallel to the fracture deformity, and the lateral plane of the flat end is the point of contact with the nail. BS positioning adjustments can then be obtained by turning the BS instead of replacing it. This thread-controlled adjustment strategy for metaphyseal fractures makes BS adjustment quite easy, avoids the need to accurately determine the entrance of the BS, and avoids additional injury to soft tissues. However, it is unclear whether the mechanical properties of the novel BS satisfy the need to enhance BIC stability. Hence, the third objective of this study is to compare the mechanical stiffnesses of the two methods for supplementing distal tibial metaphyseal fractures fixed with small-diameter IM nails. Our null hypothesis states that the mechanical properties with additional BSs will be better than with no BS, but that there will be no differences between the novel and traditional groups.
Materials and Groups
A synthetic tissue surrogate with identical geometry and homogeneous material properties was selected. Twenty-one (n = 21) fourth-generation composite Sawbones left tibiae with solid cancellous foam (Model 3401; Pacific Research Laboratories, Vashon, WA, USA) and an expert tibial nail (nail diameter = 9 mm; IRENE, Tianjin, China) were used for the investigation. Previous studies have confirmed that, compared with human bone, surrogates produce remarkable results for axial, compression, torsional, and bending stiffnesses, as well as for failure mechanisms under different loading conditions [23][24][25][26][27][28] . Three 4.2-mm-diameter bicortical locking screws, used in all specimens, were combined with two 3.5-mm-diameter cortical screws that were employed as BSs in seven (n = 7) tibiae per treatment group. The treatment groups were constructs without any BS except for the preplanned screw path (control group), those with two bicortical traditional BSs placed in the anteroposterior (AP) position (traditional group) (Fig. 1A), and those with two semi-cortical novel BSs placed in the ML direction (novel group) (Fig. 1B).
Fracture Model and Instrumentation
The 9-mm IM nail was inserted in an unreamed fashion using a standard technique 29 . An unstable distal tibial fracture was simulated by cutting the distal tibial segments at a distance of 55 mm from the tibial plafond in all specimens. The solid tibial nails were advanced to a point 6 mm from the ankle joint. The BSs were placed approximately 10 mm from the most proximal locking screw holes in accordance with the work of Krettek et al. 10 (Fig. 2). The large difference between the diameters of the implant and the medullary cavity of the metaphysis was simulated in our model. Two ML BSs were inserted to avoid fracture displacement in the frontal plane. For the traditional BS, a bicortical hole was drilled with a 2.5-mm bit using a custom-made jig. A fully threaded 3.5-mm cortical screw was placed adjacent to the nail, 10 mm from the proximal interlocking screw, in the AP direction (Fig. 3A). The novel BS was made from a 3.5-mm fully threaded cortical screw whose tip was cut and ground to a flat surface (Fig. 4). The remaining length of the new BS was determined based on the distance between the outer cortex and the surface of the nail. The location was aligned to the connecting line between the two ends of the distal transverse locking screws, 10 mm from the proximal locking screw. The outer cortex was drilled using a 2.5-mm bit. Two novel BSs were placed on the ML sides of the nail with the assistance of a custom-made jig. When the end of a novel BS touched the nail (Fig. 3B), it was tightened a further half turn with a screwdriver to increase the pressure between the nail and the BS. Thus, the novel BS enabled a modified tuning technique for positioning. The time required to place every BS was recorded. The time consumed in placing a BS was compared between the two groups, and the mark left on the nail by the two placement methods was recorded. The distal BICs were then embedded in bone cement in a cast frame.
Mechanical Testing
Loads ranging between −150 and +150 N (one-third of the body weight of a 45.9-kg person) were applied in the ML direction at 185 mm from the nail end after BIC fixation in a material testing machine 10 . Using a laser distance sensor, the maximum displacement was measured at 80 mm from the end of the nail. The average maximum deformation and standard deviation were then calculated according to the method of Krettek et al. 10 . Assessments were performed by a senior orthopedic surgeon and the present author. The constructs that survived initial loading were then tested under cyclic loading. The instrumented constructs were fixed laterally onto the pole of the load frame (MTS Mini Bionix II, Model 359, MTS Systems Corp, Eden Prairie, MN, USA) using custom-designed fixtures. Another custom-designed fixture was rigidly installed on each nail at 185 mm from its end. Through this fixture, the nail was coupled to the actuator with the axis of the nail parallel to the horizontal plane, ensuring that the direction of loading was ML. A loading from −150 to +150 N was then applied at a frequency of 1 Hz for 10,000 cycles. The BICs that tolerated cyclic testing were finally loaded to failure at 350 N (body weight of a 35.7-kg person) in the ML direction.
Maximum Displacement
The maximum displacement was used to evaluate the stiffness of the BICs. The BIC stiffness is defined as the slope of the force-versus-displacement curve; under the same loading, the smaller the maximum displacement, the greater the stiffness.
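A linear least-squares slope of the force-displacement record gives this stiffness directly. The sketch below uses placeholder readings, not digitized test data.

```python
# Stiffness as the slope of the force-displacement curve (sketch).
# The arrays are hypothetical placeholder readings, not study data.
import numpy as np

force = np.array([-150.0, -75.0, 0.0, 75.0, 150.0])    # N (hypothetical)
displacement = np.array([-2.4, -1.2, 0.0, 1.2, 2.45])  # mm (hypothetical)

stiffness, intercept = np.polyfit(displacement, force, 1)  # N/mm
print(f"stiffness = {stiffness:.1f} N/mm")
```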
Cyclical Loading and Failure to Test
As mentioned earlier, loading from −150 to +150 N was applied at a frequency of 1 Hz for 10,000 cycles in the ML direction, which represents the approximate number of steps taken over a 4-6-week period, i.e., the estimated interval of postoperative non-weight bearing 30 . This test follows the method of Hoenig et al. 31 and was used to evaluate the fatigue behavior of the BICs. To our knowledge, there are no reference data for failure-to-test loading in the ML direction in a biomechanical experiment. The 350-N failure load was therefore chosen because more than half of the samples in the control group failed at this load in our pre-experiment. Failure was defined as catastrophic, manifesting as a bone fracture, a loosened or bent nail, or other gross hardware breakage (Fig. 5).
Time Spent in Placing a BS
We used the time consumed to compare the difficulty of placing the traditional and novel BSs. The time consumed is an indirect index of possible subsequent surgical complications.
Damage to a Nail
The damage to a nail is defined as the mark left on a nail by the friction of the drilling head and cutting of the screw thread. This indicator was used to evaluate the destruction to a nail when placing a traditional or novel BS.
Statistical Analysis
All data were initially tested for normality using the Shapiro-Wilk test. One-way ANOVA (two-tailed) with a least-significant-difference post-hoc test was used to compare results among the groups. The measurements included the time spent placing a BS and the displacement of the nail. Two independent-sample t-tests were used to compare the placement times of the novel and traditional BSs. Differences were considered significant when the P value was less than 0.05.
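The stated pipeline maps directly onto SciPy routines, as sketched below. The displacement arrays are placeholder data with seven specimens per group, matching the study design but not its measurements.

```python
# Sketch of the stated statistical pipeline with SciPy.
# The arrays are hypothetical placeholder data (mm), not the study's results.
import numpy as np
from scipy import stats

control = np.array([4.1, 5.0, 6.3, 3.9, 4.6, 5.5, 4.8])
traditional = np.array([3.2, 3.9, 4.1, 2.9, 3.6, 4.4, 3.1])
novel = np.array([3.0, 3.8, 4.0, 2.7, 3.5, 4.2, 3.1])

# 1. Normality check for each group (Shapiro-Wilk).
for name, g in [("control", control), ("traditional", traditional), ("novel", novel)]:
    print(name, "Shapiro-Wilk p =", stats.shapiro(g).pvalue)

# 2. One-way ANOVA across the three groups.
print("ANOVA:", stats.f_oneway(control, traditional, novel))

# 3. Two independent-sample t-test (placement times would be compared this way).
print("t-test:", stats.ttest_ind(traditional, novel))
```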
Results
Maximum Displacement of Nail under Transverse Loading From −150 to +150 N
In the distal BICs, the addition of traditional BSs decreased the maximum displacement of the BIC by 26.2%, from 4.88 ± 1.20 mm (mean ± standard deviation) in the control group to 3.60 ± 0.72 mm in the traditional BS group (F = 5.004, P = 0.018). Compared with 4.88 ± 1.20 mm in the control group, the maximum displacement in the novel group was 3.47 ± 0.77 mm, a decrease of 28.9% (F = 5.004, P = 0.010). The difference between the traditional and novel groups was not statistically significant: the increase in BIC stability with the novel BS was 2.7% greater than that with the traditional BS, but no significant difference was observed.
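The percentage reductions can be verified directly from the reported means; all numbers in the sketch below come from the text.

```python
# Worked check of the reported displacement reductions.
d_control, d_traditional, d_novel = 4.88, 3.60, 3.47  # mm, from the text

for name, d in [("traditional", d_traditional), ("novel", d_novel)]:
    print(f"{name}: {100 * (d_control - d) / d_control:.1f}% reduction")
# -> traditional: 26.2%, novel: 28.9%
```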
Cyclical Loading and Failure to Test
All specimens in the three groups survived initial and cyclic loading. Under a loading of 350 N, failure in the control group manifested as serious nail loosening in two specimens, a fracture in one, and nail bending in two specimens. Two of the seven specimens in the traditional group failed, in both cases as a new fracture. The incidence and distribution of failures in the novel group were similar to those of the traditional group (Fig. 5). No breakage of the nail or locking screws was found at the completion of all experimental procedures. In all experimental groups, especially the novel group, no backout or breakage of the BS was found. Interestingly, new fractures mainly occurred along the interface between the BICs and the bone cement: four of the five new fractures across all groups occurred there. All nail loosening and bending occurred in the control group.
Time Spent in Placing a BS
The average time needed to place a single BS on the fracture model in the novel BS group was 2.53 min (range: 1.8-3.2 min), less than the 5.12 min (range: 4.2-6.1 min) in the traditional group (t = −7.798, P < 0.001). Placement of a traditional BS required nearly twice as much time as placement of a novel BS.
Damage to the Nail When Inserting BS
Slight damage occurred at the contact area between the BS and the nail in the novel BS group (Fig. 6A); however, more serious damage occurred in the traditional group (Fig. 6B). Owing to the different insertion methods, the drill bit and cortical screw engaged the nail for a longer time in the traditional group than in the novel group.
Difference in the Method of Adjusting the Distance
The orientation of traditional BS placement is perpendicular to the plane of the fracture deformity, to enable reduction and fixation of the flared segment. Thus, the reduction and fixation effect of the BS is determined by its entry point. The current difficulty lies in estimating an appropriate entry point for placing the BSs, which remains a challenge for surgeons given skin coverage and fracture deformity. Although some precise methods for BS insertion have been described in the recent literature 11,13,14,[19][20][21] , the extent to which the BS narrows the IM cavity depends mainly on intraoperative fluoroscopy and the experience of the surgeon. To obtain a perfect alignment, considerable time is required to adjust the location of the BS. Adjusting the BS location during the operation also prolongs the operative time, resulting in bleeding and sometimes additional fracture lines 22 (Fig. 7C). Modifying the positioning strategy of the traditional BS therefore addresses a clinical need and may improve the prognosis of fracture healing. We herein described a novel BS geometry and its placement and adjustment method. The entrance of the novel BS does not need to be precisely located, and the distance between the novel BS and the nail can be controlled by turning the screw. Placement of the novel BS is easier than the conventional method, and its placement does not lead to new fractures on account of its tuning method. To comprehensively understand the advantage of the novel BS, it can be divided into an anchored end (the part touching the bone cortex) and a functional point (the part touching the nail).
With the traditional BS placed perpendicular to the displacement of the flared segment, the distance between the nail and the functional point is determined by the anchored end, and tuning of the functional point is achieved only by moving the anchored end. The current technique for adjusting the traditional BS requires resetting it, which demands additional time and causes consequent injury. To improve the placement and function of the BS, we changed its placement from perpendicular to the plane of fragment movement to parallel to it, thereby improving the placement and adjustment strategy. Meanwhile, the tip of the screw was designed to be flat to increase the opportunity for contact with the nail (Fig. 4). Koller et al. in 2020 described the clinical use of a fully threaded 3.5-mm cancellous screw as a reduction tool for correcting frontal deformities along the coronal plane 32 . The reduction screw was then replaced with a traditional BS after the desired correction was obtained.
Method of Placing
Our idea of using the BS as a reduction tool is similar to that of Koller et al. 32 ; however, it differs in that the novel BS described herein also acts as a stabilizing tool. It is placed adjacent to the nail and does not need to be replaced once the desired reduction is obtained. Before IM insertion, the novel BS can be inserted on the concave side of the deformity, closer to the fracture in the coronal plane, to reduce the fracture segment, owing to the greater opportunity for the flat end of the BS to touch the nail. The novel BS can also be placed along the central axis of the IM nail, parallel to the frontal locking screws, or along the nail axis oriented as the frontal locking screws; this functions as a stability tool that enhances bone-nail construct stiffness after nail insertion 10 . The key advantage of this novel BS is that the distance and compression between the nail and the BS can be adjusted by turning the screw. Three beneficial clinical consequences follow from this advantage: adjusting the BS location does not require resetting it; nail damage from the drill bit and the BS thread is subtler; and secondary fractures caused by BS positioning all but disappear. However, the mechanical stiffness of the novel semi-cortical BS is unclear, and we hypothesized that it is sufficient to ensure construct stability.
Comparison of Mechanical Properties
This study demonstrates that both the novel transverse and the traditional sagittal BSs, when placed adjacent to an IM nail, can increase the primary stability of distal tibia fractures. In this model, the fracture was stabilized distally by two ML locking screws and one anterior-posterior locking screw. The construct fixed with the three locking screws alone was 26.2% less stiff than that supplemented with traditional BSs, while the novel BSs increased construct stiffness by up to 28.9%. Nevertheless, compared with the traditional group, no statistical difference was observed. The increase in BIC stiffness with the novel BS was 2.7% greater than that with the traditional BS, possibly resulting from the increased pressure at the connection between the BS and the nail: the traditional BS can only be used as an occupying screw to centralize the nail, whereas the novel BS functions as a compression screw that bears against the nail and increases construct stiffness. Breakage or backout of the novel screws was not observed in any sample test. This may be explained by the fact that the force between the BS and the nail is less than the anti-pull-out strength of the interface between the bone and the semi-cortical screws under a 150-N load. In 2019, Ketata et al. 33 used a finite element model of synthetic bone, built from computed tomography scans, to test the anti-pull-out strength of a 4.5-mm semi-cortical bone screw; it was determined to be 439 N. Although this finding relates to 4.5-mm screws, it suggests that the anti-pull-out strength of 3.5-mm semi-cortical screws can satisfy the mechanical requirements of the BS. Under the same experimental conditions, Krettek et al. 10 found that BIC stability could be increased by 57% by placing additional sagittal two-sided BSs, a stronger effect than observed in this study. The prior study utilized human cadaveric tibiae and a stainless-steel tibial nail, which may be a critical reason for the difference.
In the 350-N failure test, the BS groups produced only new fractures, whereas the control group showed considerable nail bending and loosening (Fig. 5C). One possible explanation is that the increased stiffness of the BICs provided by the additional BSs led to the new fractures occurring at the interface between the BIC and the bone cement (Fig. 5B), owing to the relatively larger movement at this interface. No BS backout occurred in the novel group during either cyclic loading or failure testing, which provides further evidence that the anti-pull-out force of a 3.5-mm semi-cortical screw can satisfy the clinical need.
Comparison of the Time Required
The time required for novel BS insertion was less than 50% of that for the traditional BS. Clinically, insertion of a traditional BS may require even more time on account of complex soft tissues, fracture displacement, and the lack of precise markers and assistive instruments. Compared with the traditional BS, inserting the new one was easier owing to the use of the interlocking screw as an obvious marker, tuning of the BS location by turning the screw, and the lack of any need for precise positioning. Moreover, continuous friction against the nail when using the traditional BS caused damage from threading and drill cutting, whereas contact between the novel BS and the nail was transient, and only minuscule damage was found in the novel BS group.
Reason Why Only the ML BS Mechanical Properties are Tested
With most tibial nails, two transverse locking screws are implanted to provide stability in the sagittal plane after reduction. However, these locking screws provide less stability in the frontal plane. Therefore, the direction of fracture translation is often along the axis of the locking screws in the frontal plane. For this reason, our model simulated the direction of fracture re-displacement as ML movement of the distal tibia fracture, which was fixed by nailing with two transverse and one AP locking screws. Additional BSs were placed on the ML sides to limit translation along the ML interlocking screws. Accordingly, the loading was applied in the ML orientation.
Limitations
This study had several limitations. Only ML testing was conducted on the current transverse fracture model. Regarding the novel method for placing BSs, further study is needed to quantify the effects of axial fatigue loading. However, a prior biomechanical study confirmed that a single additional medial BS has no effect on the distal tibia fracture model when nailing with the BS under combined cyclic axial and torsional loads. Additionally, three distal locking screws have significant advantages compared with two 34 . Another previous study 35 used finite element analysis to confirm that BS application has no additional effect on the distal tibia fracture model through a comparison of fixation with plating and nailing with the BS. The rigidity of the bone-nail construct depends mainly on the locking screws; it is therefore difficult to support the claim that additional BSs enhance the stability of the local fracture segment under axial loading. Furthermore, there is a potential risk of nail damage when drilling the BS holes and cutting threads while placing the BS using a freehand technique. Clinically, only one BS may be used on the side of the fracture. In this study, medial and lateral displacements were tested to recreate a severely unstable fracture model; therefore, two BSs in the ML direction were inserted to increase the BIC stability.
Conclusions
Based on the results of this mechanical study, we conclude that both the traditional and the novel BS techniques increased the primary stability of distal metaphyseal fractures, and they exhibited similar results in the mechanical tests. However, the novel screw helped alleviate the difficulty of tuning the BS during the operation. The time spent inserting the new BS was significantly shorter than that for the traditional one, and the damage to the nail in the novel group was more subtle than in the traditional group. Additionally, an obvious advantage of the novel BS is that the distance and pressure between it and the IM nail can be adjusted by turning the screw, which decreases the operation time and avoids the occurrence of new fractures. These advantages support the clinical application of BSs.
Acknowledgments I wish to thank Professors Zhong Li and Kun Zhang for their valuable suggestions regarding the manuscript. The successful completion of the experiment and article is a result of the efforts of all the authors, whose contributions are highly appreciated.
Declaration
None of the authors of this paper have a financial or personal relationship with other people or organizations that could inappropriately influence or bias the content of the paper.
"Materials Science"
] |
Mixed-Phase (2H and 1T) MoS 2 Catalyst for a Highly Efficient and Stable Si Photocathode
We describe the direct formation of mixed-phase (1T and 2H) MoS 2 layers on Si as a photocathode via atomic layer deposition (ALD) for application in the photoelectrochemical (PEC) reduction of water to hydrogen. Without typical series-metal interfaces between Si and MoS 2 , our p-Si/SiO x /MoS 2 photocathode showed efficient and stable operation in hydrogen evolution reactions (HERs). The resulting performance could be explained by spatially genuine device architectures in three dimensions (i.e., laterally homo- and vertically heterojunction structures). The ALD-grown MoS 2 overlayer with the mixed-phase 1T and 2H homojunction passivates the light absorber and surface states and functions as a monolithic structure for effective charge transport within MoS 2 . It is also beneficial in the operation of p-i-n heterojunctions with inhomogeneous barrier heights due to the presence of mixed-phase cocatalysts. The effective barrier heights reached up to 0.8 eV with optimized MoS 2 thicknesses, leading to a 670 mV photovoltage enhancement without employing buried Si p-n junctions. The fast-transient behaviors under light illumination show that the mixed-phase layered chalcogenides can serve as efficient cocatalysts by depinning the Fermi levels at the interfaces. A long-term operation of ~70 h was also demonstrated in a 0.5 M H 2 SO 4 solution.
Introduction
The photoelectrochemical (PEC) splitting of water into oxygen and hydrogen offers green fuel from solar energy. It requires a semiconductor that absorbs visible-light photons and generates excitons, fast charge separation within the depletion layer, and efficient charge transfer to the electrolyte. Employing thin layers of cocatalysts on semiconductor surfaces can alter the surface energetics by changing the degree of energy band bending and/or the charge transfer kinetics [1]. By choosing suitable catalytic systems, moreover, the semiconductor can be protected from the solution environment [2]. Silicon (Si) has been spotlighted as a promising earth-abundant light absorber due to its small band gap of 1.12 eV and its viability in the electronic device industry, especially in the field of solar cells. Since the available photovoltage obtainable from a single Si junction is about 0.5 V and not enough for water splitting, which requires a minimum thermodynamic potential of 1.23 V, an additional bias is required during PEC operation. In unbiased PEC configurations, therefore, multiple Si-based solar cells and tandem structures with wide-gap semiconductors are needed to boost the water splitting reactions [3]. In this study, we focus on the properties of the photocathode as a half-cell in the three-electrode configuration. Recently, we developed an ALD chemistry that can directly produce as-grown crystalline MoS 2 thin films on various substrates at low temperatures (250-300 °C) using inexpensive precursors (i.e., MoCl 5 + H 2 S). ALD techniques have shown an exceptional capability for studying the critical thickness required for optimal HER operation [29]. Here, we describe that mixed-phase (1T and 2H) MoS 2 layers can be formed on Si as photocathodes via ALD for the PEC reduction of water. Note that we did not employ the extra phase-transition procedure from the semiconducting 2H to the metallic 1T phase. Without utilizing conventional series-metal interfaces, our simple p-Si/SiO x /MoS 2 photocathode showed efficient and stable operation during HERs. The present devices have spatially genuine architectures in three dimensions (i.e., laterally homo- and vertically heterojunction structures). The effective barrier heights reached 0.6 eV with optimized MoS 2 thicknesses, leading to a V ph enhancement of 670 mV without employing buried Si p-n junctions. A long-term operation of ~70 h was also demonstrated. We suggest that, in general, 1T-2H mixed-phase layered chalcogenides could serve as efficient cocatalysts by depinning Fermi levels at the interfaces, resulting in efficient electron transfer mechanisms.
Results and Discussion
The unique device configuration of our p-Si/SiO x /MoS 2 photocathode contrasts with typical MIS photocathodes reported by other researchers (Figure 1). The electron-hole pairs are generated in the depletion region, mainly inside p-Si, and then transferred through the tunnel oxide to the thin metal interfaces toward cocatalysts in the electrolyte. The monolithic homojunctions consist of mixed-phase 1T and 2H MoS 2 that develop during the growth of MoS 2 via ALD. Depending on the degree of distortion in the S-Mo-S atomic planes, the 2H and 1T phases exhibit different structural and electronic properties. The 2H phase has semiconducting properties and is more stable compared to the other phases (e.g., 3R and 1T), whereas the 1T phase is metallic and unstable. A key mechanism may be the inclusion of an excess chlorine moiety along the MoS 2 basal plane upon incomplete reactions at low temperatures, as probed by elemental analysis, X-ray photoelectron spectroscopy, and X-ray diffraction patterns [30]. We also demonstrated that, as a result of the presence of Cl, the 1T phase was stable upon thermal annealing at 400 °C. Since the 1T phase of MoS 2 is metallic, the resulting MoS 2 layers as cocatalysts on the Si photocathode exhibit distinctively monolithic homojunctions with changes in the Φ B laterally along the interfaces, as illustrated in Figure 1b. Our p-Si/SiO x /MoS 2 photocathode makes the electron transfer pathway effective compared to a conventional MIS electrode, eliminating the additional processes that are required for complicated interface engineering. Note that additional phase-conversion (2H-to-1T) and film-transfer processes were not necessary, thanks to the direct growth of the mixed-phase films.
The overall morphologies (Figure 2c-e) and the resulting thicknesses (Figure 2a,b,f,g,k) of MoS 2 on Si surfaces were reproducible, as previously reported [29]. The layered-structural nature was shown by the Raman spectra (Figure 2j), and the mixed-phase (1T and 2H) structures were also observed via the XPS spectra (Figure 2h,i). Initial cycles up to 100 ALD cycles resulted in several MoS 2 nanoflakes (diameter of ~10 nm). Most of the nuclei layers have basal planes parallel to the substrate surfaces, as shown in Figure 2a,f. The TEM of 300-cycled MoS 2 shows more vertically grown MoS 2 layers on the p-Si, as shown in Figure 2f. This growth mechanism was observed in thicker layers (1000 cycles in Figure 2g) and was identical to earlier work by some of the authors [29].
We investigated the HER performance of our p-Si/SiO x /MoS 2 photocathodes as a function of the MoS 2 thickness under illumination (100 mW·cm⁻²), as shown in Figure 3.
For bare p-type silicon treated with HF, the photoelectrochemical reduction process begins at an onset potential of −0.12 V (defined as the potential at which the photocurrent reaches 0.1 mA·cm⁻²) with a J ph of 24 ± 2 mA·cm⁻², which is consistent with other studies [31]. The corresponding measured V ph , shown in Figure 3b, is 0.11 ± 0.04 V, much less than the theoretical maximum V ph of Si of 0.48 to 0.5 V [32]. The gradual increase in J ph near the turn-on potential (shown in Figure 3a, solid black line) indicates the slow kinetics of the charge transfer processes. The decreased V ph and the slow kinetics are attributed to the presence of surface states, followed by Fermi-level pinning (hereafter called FLP ss ) [33,34] at the surface/electrolyte interface, to the build-up of a surface oxide layer [23], and/or to electron traps from the complex surface chemistries associated with the formation of Si-OH bonds [35]. The photocathode produced via a few tens of ALD cycles of MoS 2 (data not shown) deposited on p-Si, which corresponds to less than 5 nm in thickness, showed negligible improvements in the PEC performance. This is partly due to the incomplete coverage of the silicon surface area since, from the initial ALD cycles, a number of seed layers of less than a few nanometer-thick nanoflakes randomly grow on Si. The photocathode produced with 100 ALD cycles, which is ~7 nm in average thickness, achieved an enhanced onset potential of 0.27 ± 0.02 V with an increased J ph of 27.5 ± 0.5 mA·cm⁻². As shown in Figure 3b, the difference in the V ph from the bare p-Si corresponds to the anodic onset voltage shift of the 100-cycle ALD photocathode. The increase in V ph is attributed to the surface states being reduced by the MoS 2 layers. The increase in J ph compared to bare p-Si is due to the reduced surface recombination, which is one of the limiting factors for photocurrents [36].
As the number of ALD cycles increased to 300, an onset potential shift to 0.24 ± 0.03 V was observed, with an increase in the V ph to 0.48 ± 0.01 V. As explained in our previous studies, after a certain number of flakes, the additional MoS 2 layers grow in the vertical direction, covering the remaining exposed silicon surface to passivate it from the electrolyte. Consequently, the Fermi-level pinning effect was almost alleviated, as V ph reached the maximum value for p-Si. The maximum V ph value of 0.67 to 0.80 V, which is much higher than the maximum V ph limit (V 0 ) of p-Si, and a J ph of 30 mA·cm⁻² were attained at 500 ALD cycles (30 to 35 nm in average thickness). This is mainly attributed to the formation of a p-i-n heterojunction, including MIS between the p-Si/1T-MoS 2 partially within the bulk films, which will be discussed later in more detail.
The kinetic improvement, which was partially due to the exposure of the edge sites, is depicted in Figure 3c. Tafel slope variations of 56 to 124 mV·decade⁻¹ as a function of the number of ALD cycles were obtained by fitting the Tafel plots. Note that the Tafel slope of the 500-ALD-cycle photocathode, 90 mV·decade⁻¹, was larger than that of the 300-ALD-cycle photocathode, 56 mV·decade⁻¹, the latter indicating a faster charge transfer rate. This value lies between that of 2H-MoS 2 nanodots (61 mV·decade⁻¹) [37] and that of 1T-MoS 2 (43 mV·decade⁻¹) [29] for the HER as electrocatalysts.
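As a side note on how such slopes are obtained: a Tafel slope is typically extracted by fitting the linear region of overpotential versus the logarithm of current density. The short sketch below illustrates the usual procedure; the numbers are illustrative only, not data from this work.

```python
import numpy as np

def tafel_slope(eta_V, j_mA_cm2):
    """Fit the Tafel relation eta = a + b*log10(|j|) and return b in mV/decade.

    eta_V: overpotentials (V) in the linear Tafel region
    j_mA_cm2: corresponding current densities (mA/cm^2)
    """
    b, a = np.polyfit(np.log10(np.abs(j_mA_cm2)), eta_V, 1)
    return b * 1000.0  # mV/decade

# Illustrative values only, chosen to give roughly a 56 mV/decade slope:
eta = np.array([0.05, 0.08, 0.11, 0.14])
j = np.array([1.0, 3.4, 11.5, 39.0])
print(f"Tafel slope ~ {tafel_slope(eta, j):.0f} mV/decade")
```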
Our results revealed that the PEC performance of the 500-ALD-cycle photocathode does not necessarily exhibit the highest charge transfer kinetics. Rather, the surface energetics (i.e., the capability of electron/hole charge carrier separation near the photoelectrode interface, changed by the presence of a catalyst) can also contribute to the PEC performance [38]. Increasing the number of ALD cycles from 600 to 800 degrades the PEC performance and the kinetics, as shown in Figure 3a-c, which mostly comes from the reduced light transmittance through the thicker MoS 2 layers. As mentioned in previous studies [39,40], the light intensity necessary to obtain a surface V ph turned out to be a function of the penetration depth. As the thickness increases to more than 40 nm, the degraded light penetration induces a lower generation rate of electron-hole pairs. After 1000 ALD cycles, a V ph of 0.26 ± 0.01 V, which is similar to or even less than that of bare p-Si, was observed. The increased electrical and interfacial resistance degrades the PEC performance, and a V on of −0.17 V was observed. A summary of the important parameters for the HER in PECs is shown in Table 1.
Note that our observation of the reduced PEC performance in thicker Cl-doped MoS 2 differs from recent works with MoS x Cl y grown by chemical vapor deposition. To understand the junction characteristics further, we carried out the Mott-Schottky C-V analysis of our p-Si/SiO x /MoS 2 photocathode (Figure 3d). Generally, C-V measurement can be a reliable method for the determination of the built-in voltage (V bi ) and the doping density of p-n and Schottky junction interfaces [41,42]. The C-V characteristics of the 100- and 500-ALD-cycle photocathodes have been measured in the dark. Dotted lines are fitted to the experimental data. V bi of the photocathodes can be estimated from the well-known Mott-Schottky equation, where the intercepts of the straight lines yield 0.36 and 0.80 V at 200 kHz for the 100-ALD-cycle and 500-ALD-cycle photocathodes, respectively. The slopes also indicate the p-type conductivity of the photocathode. Kenny et al. pointed out the dependence of the MIS junction characteristics on the catalyst layer thickness, in terms of partial screening of charges and the Debye length [43]. Another report, by Fujii et al., stated that the thickness of the semiconducting layer deposited on another semiconductor in a p-n heterojunction affects the depletion width, thus resulting in the V bi variation [44]. The change in the build-up of the depletion region at the p-Si/MoS 2 (100- and 500-ALD-cycle) interface junctions results in the increase of V bi .
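The Mott-Schottky relation referred to here is not reproduced in the extracted text; a standard form for a p-type space-charge capacitance (a hedged reconstruction with symbols as commonly defined, not taken from the original figure, and with sign conventions that vary between texts) is

$$\frac{1}{C^2} = \frac{2}{q\,\varepsilon\varepsilon_0 N_a A^2}\left(V_{bi} - V - \frac{kT}{q}\right),$$

so that the intercept of the linear 1/C² versus V region with the voltage axis gives V bi (to within kT/q), and the sign of the slope indicates the conductivity type.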
Indeed, our p-Si/SiO x /MoS 2 photocathodes regulated the overall energy barriers and impedances of the total junctions. Accordingly, stable operation under 1 sun illumination at a −0.3 V reverse bias and 0 V vs. RHE is shown in Figure 4a. The stable operation is due to the secure chemical contact formed at the p-Si/MoS 2 interface by the ALD method, which conformally deposited the layer on the silicon surfaces. Ding et al. experimentally confirmed the effect of a higher-quality interface for 1T-MoS 2 /Si by comparing the PEC characteristics of drop-casted 1T and CVD-grown 1T MoS 2 films, showing improved performance with a 0.235 V vs. RHE onset potential for the CVD-grown MoS 2 film on Si [45]. The metallic nanojunctions in our mixed-phase (1T and 2H) MoS 2 layers grown via ALD improved the stability for the HER by reducing the contact resistance to Si. Upon stable operation of the p-Si/SiO x /MoS 2 photocathode for 70 h under light, the resulting samples were analyzed by XRD, as shown in Figure 4b. Accordingly, the overall structures remained unchanged.
While the overall electronic structure of the photocathode can be depicted as a p-i-n heterojunction, inside the MoS 2 film the islands of embedded 1T-phase nanostructures form a Schottky junction with p-Si. In Figure 5, the band alignments of p-Si/SiO x /MoS 2 and the electrolyte in the nanoscale domain are drawn to explain an energetically favorable transport path for the injected electrons, where V o denotes the band bending at 0 bias, V r is the applied reverse bias, φ B is the Schottky barrier height, V ph is the photovoltage, E F is the Fermi level, and E Fn is the quasi-Fermi level for electrons. For the band alignment at the p-Si/MoS 2 interface, all the necessary parameters in terms of the vacuum level and the normal hydrogen electrode level in an electrolyte of pH = 0.3 are shown in Figure S1, where the p-Si doping level is ~1.6 × 10¹⁵ cm⁻³.
The injected electron passes from 1T-MoS 2 to 2H-MoS 2 , considering that 1T-MoS 2 exhibits a work function of approximately 4.2 eV [46], which is not much different from the electron affinity of 2H MoS 2 (4.3 eV), and a higher electrical conductivity compared with the 2H phase. Bai et al. studied the 1T/2H MoS 2 contact with different contact types and concluded that the edge-contact model exhibits a low tunneling barrier of 0.1 eV and even Ohmic contact in the case of excess in-plane dangling bonds at the edge of 2H MoS 2 [47]. An important interpretation from the J-V characteristics of the PEC performance is that the conditions for bulk junction properties are fulfilled after 500 ALD cycles, when the MoS 2 layer growth is already in the vertical direction. Therefore, in a situation with a high portion of the 1T phase at a Cl-ion density of ~10²³ cm⁻³ within the 2H phase, the 1T/2H MoS 2 phase exists as an edge-contact type, indicating that this electron transfer path model is valid.
Since the non-ideal growth characteristic of our ALD MoS 2 layers is well understood in the context of the strong anisotropy in its covalency, the effects of the active edge sites could be correlated in PEC operations. The initial layers form in a nanoflake-like structure, mostly parallel to the substrate up to several tens of nm, followed by growth in the vertical direction. The layers progressively passivate the surface states of Si and decrease FLP ss , which accounts for a significant portion of the electron-hole pair recombination at the Si interfaces, while maintaining the electrochemical activation sites for the HERs. The reduction in surface recombination leads to an increased V ph [48], as observed in our experiments. The quantitative analysis of the surface energetics can be understood using Equation (1) [22], where n s is the surface electron concentration under light, n s 0 is the surface electron concentration at dark equilibrium, γ is a constant at the interface (the ratio of energy states to the bulk), and V o is the amount of band bending at the junction interface. From Equation (1), we can deduce that the MoS 2 catalyst layer's thickness, lying in the nanoscale domain, changes the surface charge concentration (here, electrons) of the Si and modifies the surface energetics. The induced band bending also affects the photocurrent increment with the MoS 2 layer thickness up to a certain point, which can be analyzed using the Butler-Volmer relation in Equation (2) [49].
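Equations (1) and (2) themselves did not survive extraction. Plausible standard forms consistent with the symbols defined above (hedged reconstructions, not necessarily the authors' exact expressions) are a junction-statistics relation for the surface electron concentration,

$$n_s = n_s^0 \exp\!\left(\frac{qV_o}{\gamma kT}\right), \qquad (1)$$

and the Butler-Volmer relation for the electron current,

$$i_n = i_0\left[\exp\!\left(\frac{\alpha_n q\eta}{kT}\right) - \exp\!\left(-\frac{(1-\alpha_n)\,q\eta}{kT}\right)\right]. \qquad (2)$$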
Here, i n is the electron current, α n is the electron charge transfer coefficient, and η is the overpotential. Based on the equation, the induced band bending modification explained above is an influencing factor for the exchange current and for the overpotential, η. The current induced within the bulk region is the sum of the current induced in the junction (depletion region) and the current from the diffusion outside the depletion region, in the quasi-neutral region, which can be expressed by Equation (3) [50], where i ph is the photocurrent density, q is the electronic charge, I 0 is the monochromatic photon flux incident on the semiconductor, α is the light absorption coefficient of silicon, W is the depletion region width, and L is the bulk diffusion length of Si. Since the depletion region variation was confirmed from the Mott-Schottky analysis in Figure 3d, we can conclude that for a certain MoS 2 thickness, the solid junction characteristics between the Si and the MoS 2 thin film play a role in enhancing the photocurrent. As a result, the spatially genuine 3D architectures (i.e., laterally homo- and vertically heterojunction) with mixed phases of 1T and 2H MoS 2 mainly offer synergetic functions in PEC HERs. The ALD-deposited MoS 2 layers gradually alleviate the FLP ss effect while at the same time forming a bulk heterojunction interface. Considering the semiconductor p-n heterojunction utilized in the PEC process, there are two types of V ph values. One is at the p-n junction interfaces, and the second is on the surfaces, i.e., the MoS 2 /electrolyte interfaces [36]. It was confirmed in our previous study that our MoS 2 is insensitive to light (i.e., it has high doping concentrations). It is concluded, then, that the p-n junction interface dominates the driving potential for the separation of the generated electron-hole pairs and hence the V ph . Now, considering the p-n heterojunction part in the p-Si/MoS 2 , the built-in potential can be expressed by conventional solid-state physics in Equation (4) [51], where V bi is the built-in potential, T is the temperature (294 K), q is the charge of an electron (1.6 × 10⁻¹⁹ C), k is Boltzmann's constant (1.38 × 10⁻²³ J·K⁻¹), N a is the acceptor concentration of p-Si, N d is the donor concentration of MoS 2 , χ is the electron affinity for each type of semiconductor, E G is the band gap, and N C n and N V p are the densities of states of the conduction band and valence band, respectively. The values for MoS 2 , except for the doping density, were taken from the literature [52-55].
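Equations (3) and (4) are also missing from the extracted text. Equation (3) is most likely the classic Gärtner expression for the photocurrent of a semiconductor junction, and Equation (4) an Anderson-model built-in potential for an anisotype heterojunction; the following standard forms, consistent with the variables defined above, are offered as hedged reconstructions:

$$i_{ph} = qI_0\left(1 - \frac{e^{-\alpha W}}{1 + \alpha L}\right), \qquad (3)$$

$$qV_{bi} = E_G^{\,p} + (\chi_p - \chi_n) - kT\ln\!\left(\frac{N_C^{\,n}}{N_d}\right) - kT\ln\!\left(\frac{N_V^{\,p}}{N_a}\right). \qquad (4)$$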
The theoretical built-in voltage is 0.6 V. For the case of a Schottky barrier, it can be expressed as in Equation (5), where φ B is the Schottky barrier height and φ m is the work function of 1T-MoS 2 . The theoretical barrier height is therefore 0.87 to 0.97 V. The open circuit potential (OCP) difference between dark and light directly indicates the observed V ph , i.e., the amount of band bending at the junction interfaces (Figure 6). Upon illumination, the amount of the Fermi-level shift (V m ) is 0.8 V, as seen from a very sharp increase in the OCP shown in Figure 6a. This is distinctive in that typical PEC cells exhibit gradual increments when the light turns on. The subsequent saturation is attributed to recombination at the electrode/electrolyte interface, which results in 0.67 V (V ss ). Considering the physical dimension of the metallic 1T MoS 2 of approximately 10 nm, which is much smaller than the depletion width, environmental Fermi-level pinning (FLP nS ) is expected from the nano-Schottky junctions of p-Si/1T MoS 2 , although the surface states (FLP ss ) were passivated. Moreover, FLP nS competes with both FLDP ox and FLDP inhm , followed by pinch-off, enabling effective tuning of the Schottky barrier, which results in the final value of the effective barrier height of 0.8 V (V m ). The sharp increase in the OCP indeed shows that fast transient behavior occurred when the light turns on, implying that FLP is negligible in the present electrochemical reactions. The OCP decay after turning off the light provides information on the recombination of the generated electron-hole pairs at the electrolyte interface and on the carrier lifetime, as shown in Figure 6b,c. The hole lifetime from the voltage decay is given by Equation (6) [56].
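Equations (5) and (6) are likewise missing from the extracted text. For a metal on a p-type semiconductor, the Schottky-Mott barrier height, and for the OCP decay, the standard carrier-lifetime expression, would read (again as hedged reconstructions):

$$\phi_B = \frac{E_g}{q} + \chi_{Si} - \phi_m, \qquad (5)$$

$$\tau = -\frac{kT}{q}\left(\frac{dV_{OC}}{dt}\right)^{-1}. \qquad (6)$$

With E g = 1.12 eV, χ Si ≈ 4.05 eV, and φ m = 4.2-4.3 eV for 1T-MoS 2 , Equation (5) indeed yields the 0.87-0.97 V range quoted above.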
Figure 6c shows the hole lifetime determined from the results. The long lifetime of 60 s for the photocathode confirms that the p-Si/SiO x /MoS 2 structure effectively prevented surface recombination with the electrolyte. The fast charge transfer kinetics could be further understood by combining the EIS measurements of our p-Si/SiO x /MoS 2 photocathode shown in Figure 6d. The charge transfer characteristics can be analyzed by means of an equivalent circuit model (Figure S2b), and the circuit parameters are derived from fitting the Nyquist plots. In Figure 6d, EIS measurements on 500-ALD-cycle photocathodes under dark and illumination are presented. The overall charge transfer resistance was dramatically reduced under the illumination condition (red inset) compared to the dark equilibrium condition. Under illumination, for the 500-ALD-cycle photocathode, a charge transfer resistance (R MoS2 ) of 22.38 Ω·cm² at the MoS 2 /electrolyte interface and a p-Si/MoS 2 junction resistance (R jc ) of 27.38 Ω·cm² were obtained, confirming that the kinetics of the photocathode are facile and catalytically active. The Nyquist plot shows two distinctive semicircles, which represent the frequency-dependent resistance-capacitance (RC) characteristics. The equivalent circuit, which is depicted in Supporting Figure S2b, consists of constant phase elements coupled with the charge transfer resistances. The measured parameters are summarized in Table S1. R jc refers to the charge transfer resistance at the p-Si/MoS 2 junction, and R MoS2 is the charge transfer resistance between the MoS 2 and the electrolyte. Note that under illumination, R jc and R MoS2 decreased by more than an order of magnitude compared with the dark equilibrium condition, which reflects the induced voltage drop across the junction. The charge transfer resistance R MoS2 can be derived from the impedance scan of the low-frequency range, and the junction charge transfer resistance R jc and the bulk resistance R bulk from the mid-frequency range to the high frequencies. Q jc is the junction space-charge-region capacitance and Q MoS2 is the MoS 2 /electrolyte interface capacitance. The MoS 2 /electrolyte resistance was the lowest for the 300-ALD-cycle photocathode, owing to the vertical layers and the exposure of the edge sites. The resistance values for the 500-ALD-cycle photocathode are somewhat higher than those of the 300-ALD-cycle photocathode. The second semicircle, which corresponds to the resistance-capacitance (RC) response of the p-i-n junction, indicates that the illumination increases the junction voltage.
Finally, we qualitatively propose the degree of band bending when employing mixed-phase metal chalcogenides on Si for application in PEC water splitting. First of all, the issue of FLP ss can be cleared by ALD passivation. The other is Fermi-level pinning (FLP nS ) by the nanoscale Schottky junctions (p-Si/1T MoS 2 ) [57]: the metallic 1T MoS 2 contacted with Si forms an increased Schottky barrier height through environmental Fermi-level pinning. Moreover, it has a trade-off effect with Fermi-level depinning (FLDP ox ) by the presence of oxide interfaces, i.e., SiO x [58]. We suspect that these mechanisms cancel each other out in the V ph enhancements. Because of the discontinuous nature of the deposited layers (based on metallic 1T MoS 2 ), electrical charges are screened only partially, inducing discontinuous junctions. This is also responsible for the V ph increase, although the junction properties are not the same as for the bulk counterpart. Such a conclusion is plausible, as studies on the thickness dependence of catalyst layers on the junction characteristics insist that a catalyst layer thinner than the Debye length only partially screens charges, with non-ideal characteristics. Note that in such PEC systems, the currents flow not only through the nanoscale Schottky (metal-semiconductor) junctions but also through the semiconductor-electrolyte interfaces. This differs from conventional solid-state devices and was systematically studied by Rossi and Lewis [59]. Therefore, pinch-off effects by Fermi-level depinning (FLDP inhm ) operate from the inhomogeneous Schottky junctions at the nanoscale.
Atomic Layer Deposition
The MoS 2 film was grown on a p-type silicon wafer (boron doped, 1-30 Ω·cm, 525 ± 25 µm thickness) using a custom-designed ALD system. The silicon wafer was cleaned with piranha solution, a 3:1 mixture of concentrated sulfuric acid (H 2 SO 4 ) and hydrogen peroxide (H 2 O 2 ), for 20 min to remove any organic residues on the surface, followed by immersion in buffered oxide etch, or BOE (HF:NH 4 F 7:1), to remove the native oxides. The chamber was heated and stabilized for 30 min before the reactants were introduced. The MoS 2 thin film was deposited at a temperature range of 250-300 °C using MoCl 5 (99.6%, Strem Chemicals, Inc., Newburyport, MA, USA) and H 2 S (3.99%, balanced N 2 , JC Gas, Daegu, Korea) with a carrier gas of Ar (5N, JC Gas, Daegu, Korea). The pulse time was controlled to be 0.2 s, followed by 15 s of purging with Ar at a gas flow of 50 sccm.
Electrochemical Characterization
All electrochemical measurements were performed using a three-electrode system with Ag/AgCl as the reference electrode, a Pt wire (diameter of 0.5 mm) as the counter electrode, and MoS 2 on a silicon substrate as the working electrode. Cyclic voltammetry and linear sweep voltammetry were performed using a VMP3 potentiostat from Bio-Logic at a scan rate of 50 mV/s in 0.5 M H 2 SO 4 (pH 0.3) as the electrolyte. For illuminated OCP measurements, 5-10 samples from different batches for each cycle number were taken, and a solar simulator (Oriel Sol 301A, Newport) with a Keithley 2400 source meter was used under AM 1.5 illumination (100 mW·cm⁻²). Electrochemical impedance spectra were measured over a frequency range of 10⁶ to 0.1 Hz at 0 V vs. the reversible hydrogen electrode (RHE). The Mott-Schottky impedance measurements were performed at 300 kHz to 1 kHz at a bias of −0.6 V to 0 V vs. RHE under 1 sun illumination. The electrode area was 0.785 cm². To make an ohmic contact, the back side of the silicon was scratched with a diamond cutter, coated with a gallium-indium eutectic alloy, and attached to the copper electrode using silver epoxy paste. Magnetic stirring at 500 rpm was performed during the experiment.
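Since potentials were measured against Ag/AgCl but reported versus RHE, a Nernstian scale conversion is implied. A minimal sketch is given below; the 0.197 V offset assumes a saturated-KCl Ag/AgCl reference, which the text does not specify.

```python
def ag_agcl_to_rhe(e_ag_agcl_V, pH, e0_ag_agcl_V=0.197):
    """Convert a potential measured vs. Ag/AgCl to the RHE scale.

    Standard conversion: E(RHE) = E(Ag/AgCl) + E0(Ag/AgCl) + 0.059*pH.
    e0_ag_agcl_V depends on the filling solution (~0.197 V for saturated KCl).
    """
    return e_ag_agcl_V + e0_ag_agcl_V + 0.059 * pH

# Example for the 0.5 M H2SO4 electrolyte used here (pH ~ 0.3):
print(f"{ag_agcl_to_rhe(-0.40, 0.3):.3f} V vs. RHE")
```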
Conclusions
Our chlorine-rich, ALD-grown MoS 2 was deposited with different cycle numbers on lightly doped p-Si. Neither a high-temperature sulfurization process nor any interface engineering was necessary due to the mixed-phase characteristics of the MoS 2 thin film. Thickness-dependent PEC characteristics were theoretically demonstrated along with experimental results. Our photocathodes showed an anodic shift of 0.47 V compared to bare p-Si with a saturation J ph of 30 mA·cm⁻² and excellent stability (75 h under 1 sun illumination). The ALD-grown MoS 2 film can induce energy band bending, resulting in a V ph enhancement, which coincides with a bulk p-i-n junction model. An energetically favorable charge transfer mechanism is introduced for the embedded metallic 1T-phase MoS 2 .
Supplementary Materials: The following are available online at http://www.mdpi.com/2073-4344/8/12/580/s1, Figure S1: Schematic energy level diagram of the band alignment at the p-Si/MoS 2 interface in terms of the vacuum level and the normal hydrogen electrode level in an electrolyte of pH = 0.3; the p-Si doping level is ~1.6 × 10¹⁵ cm⁻³. Figure S2: (a) Nyquist plot of 700-ALD-cycle and 300-ALD-cycle photocathodes on p-Si under 1 sun illumination at 0 bias. (b) Equivalent circuit corresponding to the EIS measurement: R bulk is the bulk resistance of the silicon, Q jc is the constant phase element (CPE) of the p-Si/MoS 2 junction along with the junction resistance R jc , and the CPE of the MoS 2 /electrolyte interface is denoted as Q MoS2 with the resistance R MoS2 . Table S1: Measured charge transfer resistances.
"Physics"
] |
Simulation of Turbulent Wake at Mixing of Two Confined Horizontal Flows
The development of a turbulent mixing layer at the mixing of two horizontal water streams with slightly different densities is studied by means of numerical simulation. The mixing of such flows can be modelled as the flow of two components, where the concentration of one component in the mixing region is described as a passive scalar. The velocity field remains common over the entire computational domain, where the density and viscosity difference due to the concentration mainly affects the turbulent fluctuations in the mixing region. The numerical simulations are performed with the open source code OpenFOAM using two different approaches for turbulence modelling, the Reynolds Averaged Navier Stokes equations (RANS) and the Large Eddy Simulation (LES). The simulation results are discussed and compared with the benchmark experiment obtained within the frame of the OECD/NEA benchmark test. A good agreement with the experimental results is obtained in the case of the single liquid experiment. A high discrepancy between the simulated and the experimental velocity fluctuations in the case of mixing of flows with slightly different densities and viscosities triggered a systematic investigation of the modelling approaches that helped us to find out and interpret the main reasons for the disagreement.
Introduction
Interaction of liquid streams with different densities and temperatures is of particular interest in the nuclear industry. Notable examples are turbulent mixing in junctions of the primary coolant piping or boron mixing in the reactor core of a pressurized water reactor. The turbulent wake mixing zone, in which two streams interact, depends on the physical properties of the two liquids involved. Computational Fluid Dynamics (CFD) simulations are commonly used to predict turbulent phenomena and may successfully complement the experimental data. Thoroughly validated CFD models are particularly useful for the investigation of key turbulent mechanisms. Similar to the experiments, the numerical simulations can also be affected by uncertainties, in large part due to the increasing complexity of the physical models, which are very sensitive to uncertain boundary conditions [1]. The CFD models need to be validated on detailed experimental data, but it can also be the other way around; detailed numerical simulations can be of great support in verifying the regularity of experimental methods in order to avoid systematic experimental errors [2].
The focus of this paper is the simulation of the turbulent mixing wake at the mixing of two water streams with a small difference in densities. The experimental data of the GEMIX (Generic Mixing Experiment) test facility located at the Paul Scherrer Institute (PSI) are used to validate the accuracy of the numerical simulations. The measured data were obtained in the frame of the participation in the OECD/NEA benchmark exercise [3]. The numerical simulations are performed with the open source code OpenFOAM [4] using two different approaches for turbulence modelling, the Reynolds Averaged Navier Stokes equations (RANS) and the Large Eddy Simulation (LES).
Mixing Experiment
The confined wake flow water mixing experiments have been carried out at the GEMIX facility [3].
Figure 1: The geometry of the GEMIX computational domain (a) and the measuring positions (b). The turbulent mixing zone is illustrated by green colour [3].
The Y-shaped square flow channel (Figure 1(a)) is used for the experiments. The inlet section consists of the upper and the lower leg. The upper and the lower streams have the same volumetric flow rate and the same shape of the inlet velocity profile. The two inlet legs, separated by a 3° splitter plate, lead to the mixing section of the channel. The mixing section, with a square cross-sectional area of 50 mm by 50 mm, is 600 mm long (see Figure 1).
Two experimental cases are considered that differ in the properties of the flow entering the upper and the lower leg. Hereinafter they are named the "single liquid" and the "two-liquid" case. In the "single liquid" case, tap water at 20 °C is used for both inlet legs. In the "two-liquid" case, tap water at 20 °C flows through the upper inlet leg and deionized water at 22.5 °C with added sucrose flows through the lower leg. The mass fraction of the sucrose, in conjunction with a temperature adjustment, is used to increase the density of the water flowing through the lower leg by 1%, while the kinematic viscosities of the two streams differ by approximately 2%. The physical properties of the liquids used in the experimental cases are listed in Table 1. In the mixing section, particle image velocimetry (PIV) and laser-induced fluorescence (LIF) are applied to measure the mixing process. One wire mesh sensor at a time is installed to measure the cross-sectional concentration profile. The five measuring planes are located 50, 150, 250, 350, and 450 mm downstream of the splitter plate [3].
During the experiment the velocity values in the x and y directions were measured and the RMS velocity fluctuations were calculated. The experimentalists assumed that the velocity fluctuations are the same in the y and the z direction and consequently did not measure the velocity fluctuations in the z direction. The turbulence kinetic energy is therefore calculated from the velocity fluctuations measured in the experiment as $k = \frac{1}{2}\left(\overline{u'^2} + 2\overline{v'^2}\right)$ (1), where $u'$ and $v'$ are the velocity fluctuations in the x and the y direction, respectively.
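A minimal sketch of Equation (1) as it would be applied to the PIV fluctuation data, assuming (as the experimentalists did) that the spanwise fluctuations equal the crosswise ones:

```python
import numpy as np

def turbulence_kinetic_energy(u_prime, v_prime):
    """k = 0.5*(<u'^2> + 2*<v'^2>), taking w' ~ v' as in the experiment."""
    return 0.5 * (np.mean(u_prime**2) + 2.0 * np.mean(v_prime**2))

# Illustrative fluctuation samples (m/s), not measured data:
u_p = np.array([0.010, -0.020, 0.015])
v_p = np.array([0.005, -0.010, 0.008])
print(turbulence_kinetic_energy(u_p, v_p))
```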
Simulation Model
Numerical simulations are performed with the OpenFOAM CFD code, version 1606+ [4]. The twoLiquidMixingFoam solver is designed for modelling the mixing of two miscible incompressible fluids and is used to simulate both experimental cases. This solver solves one momentum equation, equations for turbulence transport, and one additional equation for the transport of the passive scalar (concentration $c$), of the form $\frac{\partial c}{\partial t} + \nabla\cdot(c\,\vec{u}) = \nabla\cdot\!\left[\left(D_{AB} + \frac{\nu_t}{Sc_t}\right)\nabla c\right]$, where $\vec{u}$ is the velocity vector, $D_{AB}$ is the molecular diffusion coefficient, $\nu_t$ is the turbulent (eddy) viscosity, and $Sc_t$ is the turbulent Schmidt number. From the solved concentration field, a corrected density field is obtained as $\rho = c\,\rho_1 + (1-c)\,\rho_2$. The new density is then used in the momentum and turbulence transport equations. The results include single velocity, pressure, and concentration fields.
The two experimental cases described in Section 2 are simulated. In the further text they are referred to as the "Single liquid" and the "Two-liquid" case. The diffusion coefficient $D_{AB}$ and the turbulent Schmidt number $Sc_t$ were set to 2.0 × 10⁻⁵ m²/s and 1.25, respectively, for both cases.
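A minimal sketch of the mixture treatment described above, assuming a linear blend of the component densities and the usual eddy-diffusivity closure (function names are illustrative, not taken from the solver source):

```python
def mixture_density(c, rho1, rho2):
    """Density of the mixture from the passive scalar c (fraction of
    component 1): a linear blend, as in the corrected density field above."""
    return c * rho1 + (1.0 - c) * rho2

def effective_diffusivity(D_ab, nu_t, Sc_t=1.25):
    """Effective scalar diffusivity: molecular diffusion plus the eddy
    diffusivity nu_t / Sc_t (Sc_t = 1.25 as set in this study)."""
    return D_ab + nu_t / Sc_t

# Example with the paper's D_AB = 2.0e-5 m^2/s and some local nu_t:
print(effective_diffusivity(2.0e-5, 1.0e-4))  # 2e-5 + 1e-4/1.25 = 1.0e-4
```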
Two different approaches are used to model the turbulence. In the first approach, the Reynolds Averaged Navier Stokes (RANS) equations are used to resolve the flow [5]. Within the RANS approach the entire spectrum of turbulent fluctuations is modelled using the k-ω SST (Shear Stress Transport) model with the default coefficients and the scalable wall functions [6]. The second approach, the Large Eddy Simulation (LES), applies space filtering on the transient momentum equations and uses the Wall-Adapting Local Eddy-viscosity subgrid-scale (SGS WALE) model for proper scaling in the near-wall region [7]. The mesh for the considered LES calculation is sufficiently dense that the latter (subgrid-scale) contribution is small enough to be neglected. The turbulence kinetic energy for the LES calculation is thus calculated from the resolved velocity fields in the same way as in the experiment (1).
Computational Mesh and Boundary Conditions.
The computational domain for the RANS calculations includes the 50 mm long inlet legs before the junction and the 450 mm long mixing section. The origin of the coordinate system in the domain is located at the tip of the splitter plate (see Figure 1(a)). The coordinates x, y, and z represent the streamwise, the crosswise, and the spanwise direction, respectively. The mesh used for the RANS calculations is shown in Figure 2. The mesh cells are uniformly distributed in the streamwise direction, while they are refined in the wall-normal direction near the channel walls and in the middle of the mixing section (crosswise and spanwise) using a linear Stretching Ratio (SR). The convergence of the results during mesh refinement is studied on four different meshes, listed in Table 2. The finest mesh A and the coarsest mesh D consist of 1,500,000 and 60,350 elements, respectively. Wall-resolved RANS using a second-order method usually requires y⁺ ≤ 1 for the mesh near the wall. This requirement is fulfilled for meshes A and B (see Table 2) and shows up when comparing the turbulence kinetic energy profiles just after the splitter plate (at x = 0.05 m) in Figure 3. Mesh B shows no major discrepancies compared to the results with the finest mesh A; hence it is used for all further RANS simulations in this study.
The computational domain used for the LES calculations includes the 100 mm long inlet legs before the junction and the 450 mm long mixing section. The subgrid-scale model in the LES simulation requires a denser mesh, with mesh cells having a much lower aspect ratio (AR) than in the case of RANS. The mesh parameters for the LES are listed in Table 3. The mesh cells are uniformly distributed in the streamwise direction and refined in the wall-normal direction near the channel walls and in the mixing section (crosswise and spanwise) to satisfy the condition y⁺ ≈ 1 at the walls. The mesh parameters are comparable to the ones used in the wall-resolved LES simulations of the channel flow [2]. The velocity and turbulence kinetic energy boundary conditions at the inlet for the RANS calculations were extracted from the experimental profiles provided within the benchmark, with an average inlet velocity value of 0.6 m/s. As already mentioned, the LES domain uses longer inlet legs in order to achieve a fully developed flow at the inlet. The mapped boundary condition implemented in OpenFOAM uses the recycling method to generate the inflow turbulence from the initial disturbances in the velocity field. For both types of calculations the no-slip boundary condition is applied on the walls and the Neumann boundary condition (the gradient of the variable is equal to zero) is used at the outlet.
Using a Flow Through Time (FTT) measure, which is the ratio of the streamwise length of the domain to the
Results and Discussion
4.1. On the Inlet Flow Conditions. In the case of RANS the experimental profiles for the inlet velocity and the turbulence kinetic energy are directly imposed at the inlet boundaries of both inlet legs. This is however not possible in the case of the LES approach. Instead, the fully developed flow at both inlet legs can be reproduced by using the recycling method for the generation of the inflow turbulence. After some appropriate length $L$ from the inlet, the velocity field was recycled back to the inlet. The recycling distance must be long enough so that the large flow structures are not affected. In our case the used recycling length was $L$ = 5 cm. The adequate length in the streamwise direction was verified by calculating the time-averaged autocorrelation function of the instantaneous velocity components along the central line of the inlet leg (Figure 4), using an expression of the form $R_{ii}(\delta x) = \overline{u_i'(x)\,u_i'(x+\delta x)}\,/\,\overline{u_i'^2}$. One of the key parameters for wall-bounded flows is the mean friction velocity $u_\tau = \sqrt{\tau_w/\rho}$, which can be calculated from the simulation results using the local wall-shear stress $\tau_w$ (near-wall velocity gradient), and the friction Reynolds number $Re_\tau = u_\tau\,\delta/\nu$, where $\delta$ is a characteristic length related to the hydraulic diameter $D_h$ of the inlet leg, which amounts to 0.033 m. The mean friction velocity calculated from the results is $u_\tau$ = 0.0348 m/s and the friction Reynolds number is $Re_\tau$ = 1100. The bulk Reynolds number based on the mean streamwise velocity and $D_h$ is Re = 20,000.
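A short sketch of the friction-velocity bookkeeping, using standard water properties; note that the exact length scale in the paper's Re_tau definition is not shown, so the check below is approximate:

```python
import numpy as np

def friction_velocity(tau_wall, rho):
    """u_tau = sqrt(tau_w / rho); tau_w comes from the near-wall velocity gradient."""
    return np.sqrt(tau_wall / rho)

def friction_reynolds(u_tau, length_scale, nu):
    """Re_tau = u_tau * length_scale / nu (length scale assumed, see note above)."""
    return u_tau * length_scale / nu

# With water at ~20 C (nu ~ 1.0e-6 m^2/s) and the hydraulic diameter D_h = 0.033 m,
# the reported u_tau = 0.0348 m/s gives Re_tau ~ 1150, close to the reported 1100.
print(friction_reynolds(0.0348, 0.033, 1.0e-6))
```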
The velocity profiles and the fluctuations in channel flows are usually compared with the results from similar studies in nondimensional form using the normalization with the friction velocity $u_\tau$: $y^+ = y\,u_\tau/\nu$ and $U^+ = U/u_\tau$, where $y^+$ and $U^+$ represent the nondimensional wall distance and the nondimensional mean velocity, respectively. Mikuž and Tiselj [2] used the LES approach to study a similar channel flow with friction Reynolds number $Re_\tau$ = 1020. The obtained friction velocity was $u_\tau$ = 0.04923 m/s. Zhang et al. [8] used the DNS approach to study the flow in a duct of square cross section with friction Reynolds number $Re_\tau$ = 900. Figure 5 shows the comparison of our LES streamwise velocity profiles (located at the inlet leg, 5 cm before the splitter plate) with the LES velocity profiles obtained by Mikuž and Tiselj [2] and the DNS results of Zhang et al. [8]. The profiles are similar near the wall; some smaller discrepancies can be observed at $y^+$ above 100, where the velocity profiles in the duct are slightly steeper.
Figure 6 compares our LES simulations of the three normalized components of the inlet velocity fluctuations with the LES velocity fluctuations obtained in [2] and with the DNS fluctuations of Zhang et al. [8]. As discussed in [9], the resolved velocity fluctuations obtained by the LES should be smaller than those calculated by the DNS or the measured ones. This is shown also in Figure 6, at least for the fluctuations in the crosswise and the spanwise direction. The streamwise fluctuations for both LES calculations are however somewhat higher than for the DNS results. In general, the discrepancies between the three cases in Figure 6 are relatively small. These results show that the LES inlet flow conditions are reasonably well predicted and approximately correspond to the fully developed inlet flow.
The comparison of the inlet flow conditions between the experiment and those obtained by the RANS and the LES simulations is shown in Figure 7. It can be seen that the streamwise velocity profile in the experiment is flatter (due to the installed flow-straightener grids) than in the LES simulation. The turbulence kinetic energy near the wall is much higher in the case of LES. It should be noted that the experimental inlet boundary conditions cannot simply be reproduced in LES simulations, where fully developed flow conditions were used instead. As shown in Figure 7, the experimental profiles of the inlet velocity and turbulence kinetic energy can be directly imposed as the inlet boundary in the case of RANS.
Single Liquid Case: Mixing Region. Figure 8 shows the streamwise velocity profiles in the mixing region of the channel at different measuring positions. It can be seen that the streamwise velocity profiles are relatively well predicted by both the RANS and the LES approach.
The turbulence kinetic energy (k) profiles in the mixing section are shown in Figure 9. The profiles seem to be well predicted by both simulations at all measuring positions except for the last one, located 0.45 m away from the splitter plate. At the locations x = 0.05 m and x = 0.015 m the simulated profile predicts a slight local drop near y = 0, which is not observed in the experiment. This seems to be a consequence of the eddy-viscosity modelling approach, which predicts lower k at locally lower velocity gradients. If we compare the streamwise velocity profiles in Figure 8 and the k profiles in Figure 9, it can be seen that in the narrow region around y = 0 the velocity profile is relatively flat, with a small local velocity gradient that leads to the local minimum in the k profile. Similar results, with the deficit in the k profiles, were observed also in the measured data of the previous benchmark experiment performed at the GEMIX facility [10]. The RANS results show somewhat better matching with the experimental results, whereas the LES results show some discrepancy, especially in the region near the walls. This seems to be a consequence of the different inlet profiles of the velocity and k in the LES calculation. The higher k from the inlet propagates into the mixing zone and, with increasing distance from the splitter plate, slowly approaches the experimental values.
For the LES case the most credible comparison is that of the directly calculated velocity fluctuations against the experimental results. Figure 10 shows the comparison of the streamwise and the crosswise velocity fluctuations in the mixing zone at several locations along the channel. The crosswise components of the velocity fluctuations are reasonably well predicted by the simulations, except at the last measuring position. On the other hand, the streamwise components are much better predicted away from the splitter plate, while a significant overprediction just after the merging of the flows can be observed, especially in the proximity of the walls. This further shows that the inlet flow condition can be the main reason for the discrepancy. The experimentalists did not measure the inlet conditions for the mixture of water and sucrose, assuming that the velocity profiles and fluctuations should also be similar.
The comparison of the measured and the calculated results (RANS and LES) for the single liquid and the two-liquid case at the location just behind the splitter plate is shown in Figure 11. The calculated profiles of the velocity and the turbulence kinetic energy are indeed rather similar for both cases. The experimental velocity profiles also match for both cases, but a huge difference can be observed in the experimental turbulence kinetic energy profiles in the mixing region, where the turbulence kinetic energy for the two-liquid case is more than 10 times higher than that of the single liquid case.
In Figures 12, 13, 14, and 15 the calculated profiles of the two-liquid case are compared with the experiments at different locations along the mixing region. The calculated velocity profiles in the mixing region for the two-liquid case match the profiles from the experiment, as can be seen in Figure 12. Figure 13 shows the comparison of the concentration profiles, where the simulated profiles are somewhat wider than the experimental ones. However, the calculated turbulence kinetic energy profiles strongly underpredict the experimental data, as shown in Figure 14. The underpredicted turbulence kinetic energy is the main reason why we continued with the LES approach also in the two-liquid case: the LES approach resolves the velocity fluctuations and offers better insight into the behaviour.
The measured and the calculated velocity fluctuations for the two-liquid case are shown in Figure 15. The measured velocity fluctuations in the centre of the channel are approximately the same in both directions, as are the velocity fluctuations calculated with the LES. The fluctuations are well predicted for the single liquid case (Figure 10) but are greatly underpredicted for the two-liquid case (Figure 15). Despite the small differences in the density and in the kinematic viscosity in the two-liquid case, only 1% and 2%, respectively, the measured turbulence kinetic energy is an order of magnitude higher than for the single liquid case. Several possible reasons for such a turbulence increase are addressed and discussed in the sensitivity analysis section.
Sensitivity Analyses.
Trying to understand the discrepancies, sensitivity calculations with different diffusion coefficient and turbulent Schmidt number values were also performed. These two parameters influence only the concentration profile, while the other flow quantities remain unaffected. Figure 16 shows the influence of the turbulent Schmidt number on the shape of the mixing layer. It can be observed that the smaller the turbulent Schmidt number, the wider the mixing zone.
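The qualitative trend can be reproduced with a self-similar gradient-diffusion estimate of the mixing layer; this is a minimal sketch assuming a constant eddy viscosity, with illustrative values that are not taken from the benchmark:

```python
import numpy as np
from scipy.special import erf

def concentration_profile(y, x, U=0.6, nu_t=1e-4, Sc_t=0.7):
    """Self-similar scalar profile c(y) at distance x behind the splitter
    plate, with turbulent diffusivity D_t = nu_t / Sc_t."""
    D_t = nu_t / Sc_t
    delta = np.sqrt(4.0 * D_t * x / U)   # diffusive mixing-layer thickness
    return 0.5 * (1.0 + erf(y / delta))

y = np.linspace(-0.02, 0.02, 9)
for Sc_t in (0.3, 0.7, 1.0):   # smaller Sc_t -> larger D_t -> wider layer
    print(Sc_t, concentration_profile(y, x=0.25, Sc_t=Sc_t).round(2))
```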
Despite the different inlet velocity profile shapes (uniform, fully developed) used in the calculations, the velocity fluctuations do not increase. To investigate the possible influence of the turbulence closure model, alternative turbulence models (the standard k-ε and the Launder-Reece-Rodi (LRR) model) were also tested within the RANS approach, but no improvements were obtained, as can be seen in Figure 17. The same conclusion can be drawn when different solver methods within the OpenFOAM code were tested: the turbulence kinetic energy was still an order of magnitude lower than in the experiment.
Buoyancy Effect on the Production of the Turbulence Kinetic Energy. The additional turbulence kinetic energy production term due to the buoyancy [11] can be written as G_b = -(ν_t/(σ_t ρ)) g·∇ρ, (7) where ν_t, σ_t, ρ, and g are the eddy viscosity, a numerical constant, the density, and the gravitational acceleration, respectively. The implementation of the buoyancy production term in the turbulence kinetic energy equation within OpenFOAM is not possible without making significant changes to the source code. Therefore, the buoyancy effect was estimated by superposition of the buoyancy source term on the k values. Based on the current simulation results, the increase of k due to the buoyancy was calculated from (7). The increase of k in the mixing zone (around x = 0.05 m) amounts to 18% at most, which is much too small to explain the underprediction of the measured k values. It should be noted that here the buoyancy effect is simply superimposed on the k values calculated without this term. In a real simulation a feedback effect on the momentum equation appears: the increased turbulence kinetic energy due to the buoyancy would increase the turbulent viscosity, which would dampen the velocity gradients in the momentum equation. The lower velocity gradients would in the next step enter the turbulence kinetic energy equation, resulting in lower turbulence kinetic energy values. Hence, the estimated increase of k due to the buoyancy can be regarded as the highest possible.
Refractive Index Effect in the Experiment. The above sensitivity analyses offer no plausible explanation for the large disagreement between the simulated and measured k profiles in the mixing region arising from modelling effects. Another possible source of error may lie in the measured data, which is briefly discussed here.
When two miscible liquids are mixed, high concentration gradients may occur. According to Heidcamp [11], the refractive indexes of the two liquids involved in this benchmark differ by 4 × 10^-3. When two liquids with different refractive indexes are mixed, a refractive index gradient appears, which affects the light propagation through the liquid. Concentration and refractive index fluctuations may distort the light propagation, and this could lead to overestimated values of the velocity fluctuations.
The particle image velocimetry camera sees a particle at a position that is corrupted by the refractive index gradient. At the next measurement, the camera sees the same particle at a different position, which is also corrupted, but in a different direction. In reality, the particle travelled a very short distance in the time interval between the measurements, but because of the light diffraction the camera may record a greater or smaller distance and consequently measure higher velocity fluctuations. Based on the comparison of simulation results with experimental data, the problem of the refractive index effect in the two-liquid mixing case has already been exposed in [12]. This was later confirmed by the experimentalists [13], who observed that the distortions in the measurements grow with the distance between the measuring position and the camera lens. For the measurement of the mean velocity components this effect is not significant, due to the averaging process. However, in the case of the measured velocity fluctuations this effect may lead to a significant experimental error, which cannot be easily estimated at the current state of knowledge [13].
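The mechanism can be demonstrated with a small Monte Carlo experiment; this is a sketch assuming the optical distortion acts as an independent random displacement error on each recorded particle position (the magnitudes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt   = 100_000, 1e-3      # samples, time between PIV frames (s)
u_true  = 0.6                # true particle velocity (m/s)
sigma_x = 20e-6              # apparent position error per frame (m), assumed

dx_true = u_true * dt
# each frame carries its own independent optical position error
dx_meas = dx_true + rng.normal(0, sigma_x, n) - rng.normal(0, sigma_x, n)
u_meas  = dx_meas / dt

print(u_meas.mean())  # ~0.60: the mean velocity is essentially unaffected
print(u_meas.std())   # ~0.028: spurious "fluctuations" from the optics alone
```

Exactly as argued above, the averaging removes the error from the mean velocity, while the apparent fluctuation level is inflated.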
Conclusions
A numerical simulation of the evolution of the turbulent mixing layer in the horizontal confined two-component flow was performed with the OpenFOAM code, version v1606+.
The simulated velocity and turbulence kinetic energy in the single liquid case obtained with the RANS and the LES simulations match the experimental results relatively well. The RANS simulations provided somewhat better agreement with the single liquid experiment due to the possibility of imposing the experimental inflow condition. Nevertheless, the LES approach has proved to be a very useful method when a better insight into the development of the primary velocity fluctuations is required.
In the two-liquid case only the calculated velocity and concentration profiles agree satisfactorily with the experimental data. On the contrary, the simulated velocity fluctuations, and consequently the turbulence kinetic energy, are an order of magnitude lower than the experimental values, for both the RANS and the LES simulations. The comparison of the single liquid and the two-liquid simulation results shows very similar velocity and velocity fluctuation profiles, since the density and viscosity differences between the two mixing flows in the two-liquid case are very small, only 1% and 2%, respectively. Despite these small differences, the velocity fluctuations measured in the two-liquid experiment are an order of magnitude higher than in the single liquid experiment. A sensitivity analysis of the modelling approaches was carried out in order to explain the deviation in the fluctuations. The simulation results indicated that the possible reason for such a large disagreement may be attributed to a systematic experimental error: the effect of the refractive index difference may lead to an experimental error in the measured fluctuations for the two-liquid case.
This study shows the tight relationship between the experimental and the simulation results.
Figure 2: Computational domain with the mesh B used for the RANS calculations.
Figure 4: The time-averaged autocorrelation function of the instantaneous velocity components taken along the central line of the inlet leg.
Figure 5: The streamwise velocity profiles in nondimensional units.
Figure 7: Streamwise velocity (a) and turbulence kinetic energy (b) profiles at the inlet.
Figure 8: The streamwise velocity profiles at different locations for the single liquid case.
Figure 10: Velocity fluctuation profiles at different locations for the single liquid case.
Figure 11: Comparison of the velocity profiles (a) and turbulence kinetic energy (b) in the single liquid and two-liquid case at x = 0.05 m.
Figure 12: Streamwise velocity profiles at different locations for the two-liquid case.
Figure 14: Turbulence kinetic energy profiles at different locations for the two-liquid case.
Table 2: Mesh parameters (total number of mesh nodes, number of nodes in the streamwise, crosswise, and spanwise directions, aspect ratio, and calculated maximal nondimensional wall distance) used for the RANS calculations.
Table 3: Mesh parameters used for LES calculations.
"Physics"
] |
Multiresolution MBMS transmissions for MIMO UTRA LTE systems
Hierarchical constellations constitute a simple technique for achieving multiresolution and, therefore, are appealing for MBMS (Multimedia Broadcast and Multicast Service). In this paper we consider the use of M-QAM hierarchical constellations (Quadrature Amplitude Modulation) combined with MIMO (Multiple Input Multiple Output) for the transmission of multicast and broadcast services in UTRA (Universal Mobile Telecommunications System Terrestrial Radio Access) Long Term Evolution (LTE) systems based on Orthogonal Frequency Division Multiplexing (OFDM). Due to the demanding channel estimation requirements and the high sensitivity to interference resulting from the usage of several antennas and hierarchical constellations, an enhanced receiver based on the turbo concept is employed and its performance is evaluated.
INTRODUCTION
It is widely recognized that OFDM modulations [1] are suitable for broadband wireless systems. For this reason they were selected for several digital broadcast systems and wireless networks [2] and are also being considered for UTRA LTE [3]. Regarding UTRA LTE, special attention is being devoted to the support of MBMS, which has already been standardized in 3GPP UTRAN (UMTS Terrestrial Radio Access Network) Release 6 [4] and 7 [5]. The goal is to enable efficient support of downlink streaming (from the base station to the mobile terminal) and download-and-play type services to large groups of users. From the radio perspective, MBMS includes point-to-point (PtP) and point-to-multipoint (PtM) modes. For the PtM mode it seems attractive to employ hierarchical modulations, since this is a simple and flexible enhancement technique that can increase the transmission efficiency thanks to its ability to provide unequal error protection to different bits and thus provide multiresolution within a cell. By having several classes of bits with different error protection, to which different streams of information are mapped, a given user can attempt to demodulate only the more protected bits, or also the bits that carry the additional information, depending on the propagation conditions. This type of approach is possible whenever the information is scalable, as in the case of coded voice or video signals, as studied in [6], [7]. For this reason hierarchical 16-QAM and 64-QAM constellations have already been incorporated into the DVB-T (Digital Video Broadcasting - Terrestrial) standard [8].
MIMO schemes have emerged as one of the most promising methods for capacity increase in a communication system [9], [10] and are being considered for UTRA LTE [3]. In MIMO systems with coherent detection, channel estimation plays a crucial role, since the performance of the spatial signal processing in the receiver depends on the accuracy of the channel estimates. Furthermore, QAM constellations can be severely affected by inaccurate channel estimates.
In this paper we consider the use of QAM hierarchical constellations in a UTRA LTE OFDM-based system employing multiple transmitting and receiving antennas, with the aim of supporting broadcast and multicast services. To deal with the high sensitivity to channel estimation errors we employ an iterative receiver capable of performing joint MIMO detection and channel estimation. This receiver is based on the approach proposed in [11] for WCDMA systems. It can apply different MIMO equalization techniques during the iterative process and obtain refined channel estimates by considering the data symbols as extra pilots, as proposed in [12].
The paper is organized as follows. First, Section II introduces hierarchical constellations and defines the model of the MIMO-OFDM system considered in this study. In Section III the proposed iterative receiver structure and the respective channel estimation process are described. Section IV presents some performance results obtained with the proposed scheme, while the conclusions are given in Section V.
A. M-QAM Hierarchical Signal Constellations
In hierarchical constellations there are two or more classes of bits with different error protection, to which different streams of information can be mapped. By using nonuniformly spaced signal points (where the distances along the I or Q axis between adjacent symbols are different) it is possible to modify the different error protection levels. As an example, a nonuniform 16-QAM constellation can be constructed from a main QPSK constellation where each symbol is in fact another QPSK constellation, as shown in Figure 1. The basic idea is that the constellation can be viewed as a 16-QAM constellation if the channel conditions are good enough, or as a QPSK constellation otherwise. In the latter situation, the received bit rate is reduced to half. These constellations can be characterized by the parameter k1 = D1/D2 (0 < k1 ≤ 0.5), as shown in Figure 1. If k1 = 0.5, the resulting constellation corresponds to a uniform 16-QAM. This approach can be naturally extended to any QAM constellation size M, where the number of possible classes of bits with different error protection is (1/2)·log2(M).
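A minimal generator for such a constellation is sketched below; the placement of quadrant centres and sub-offsets follows the D1/D2 description above, while the unit-energy normalization is a convenience added here, not taken from the paper:

```python
import numpy as np

def hierarchical_16qam(k1, d2=2.0):
    """Nonuniform 16-QAM: a main QPSK of quadrant centres (spacing D2),
    each centre carrying an inner QPSK (spacing D1 = k1 * D2).
    k1 in (0, 0.5]; k1 = 0.5 reproduces the uniform 16-QAM."""
    d1 = k1 * d2
    centres = np.array([a + 1j * b
                        for a in (-d2 / 2, d2 / 2)
                        for b in (-d2 / 2, d2 / 2)])
    offsets = np.array([a + 1j * b
                        for a in (-d1 / 2, d1 / 2)
                        for b in (-d1 / 2, d1 / 2)])
    points = (centres[:, None] + offsets[None, :]).ravel()
    return points / np.sqrt(np.mean(np.abs(points) ** 2))  # unit mean energy

const = hierarchical_16qam(k1=0.4)   # the value used in Section IV
```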
B. Transmitted Signals
In Figure 2 we show a transmitter chain that incorporates QAM hierarchical constellations into a UTRA LTE based MIMO-OFDM transmission. In the proposed scheme there are (1/2)·log2(M) parallel chains for the different input bit streams that will have unequal error protection: for 16-QAM we can use two parallel chains, while for 64-QAM we can use three. Each stream is encoded, interleaved, and mapped into constellation symbols in the modulation mappers according to the importance attributed to the chain. Pilot symbols are inserted into the modulated data sequence, which is then converted to the time domain using an IDFT (Inverse Discrete Fourier Transform). The resulting stream is then split into several smaller streams which are transmitted simultaneously by the M_tx transmitting antennas. Note that in the proposed scheme the coding is not performed independently for each antenna; instead, each data sequence is encoded and divided equally among the transmitting antennas by the serial-to-parallel block. The objective is to obtain some diversity for the same encoded block. In this paper we consider the frame structure of Figure 3 for a MIMO-OFDM system with N carriers. According to this structure, the pilot symbols are multiplexed with the data symbols using a spacing of ΔN_F subcarriers in the frequency domain and ΔN_T OFDM blocks in the time domain, with T_s denoting the symbol duration, N_G the number of samples of the cyclic prefix, and h_T(t) the adopted pulse-shaping filter.
A. Receiver Structure
To achieve reliable channel estimation and data detection we employ a receiver capable of jointly performing these tasks through iterative processing. The structure of the iterative receiver is shown in Figure 4, where N_rx receiving antennas are employed. According to the figure, the signal, which is considered to be sampled and with the cyclic prefix removed, is converted to the frequency domain after an appropriate size-N DFT operation. If the cyclic prefix is longer than the overall channel impulse response, the resulting sequence received at antenna n can be expressed as in (2). These sequences of samples enter the MIMO equalizer (Spatial Demultiplexer block), which separates the simultaneously transmitted streams. This can be accomplished with an MMSE (Minimum Mean Squared Error) equalizer [13], a ZF (Zero Forcing) equalizer [13], a Maximum Likelihood Soft Output (MLSO) criterion, or an interference canceller (IC) [11]. It is possible to perform some of the receiver iterations using one spatial demultiplexing technique, like the MMSE, and the others using a different one, like the IC, as was studied in [11]. In any case, after MIMO equalization the demultiplexed symbol sequences are serialized and pass through the demodulator, de-interleaver, and channel decoder blocks. This channel decoder has two outputs: one is the estimated information sequence and the other is the sequence of log-likelihood ratio (LLR) estimates of the code symbols. These LLRs go through the Decision Device, which outputs either soft-decision or hard-decision estimates of the code symbols, and enter the Transmitted Signal Rebuilder, which performs the same operations as the transmitter (interleaving, modulation, conversion of serial to parallel streams). The reconstructed symbol sequences are then used for a refinement of the channel estimates and also for a possible improvement of the spatial demultiplexing task (in case an IC is employed) in the subsequent iteration.
The possible MIMO equalization techniques are now briefly described. Using matrix notation, the MMSE estimate of the transmitted symbols in subcarrier k and OFDM block l is given by Ŝ_{k,l} = (Ĥ_{k,l}^H Ĥ_{k,l} + σ²I)^{-1} Ĥ_{k,l}^H R_{k,l}, (3) where Ŝ_{k,l} is the M_tx×1 estimated transmitted signal vector with a different transmit antenna in each position, Ĥ_{k,l} is the N_rx×M_tx channel matrix estimate with each column representing a different transmit antenna and each row a different receive antenna, R_{k,l} is the N_rx×1 received signal vector with a different receive antenna in each position, and σ² is the noise variance. The ZF estimate can be obtained simply by setting σ to 0 in (3).
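Both linear detectors can be written per subcarrier in a few lines of NumPy; the shapes follow the notation above:

```python
import numpy as np

def mmse_detect(H, r, sigma2):
    """MMSE estimate S = (H^H H + sigma^2 I)^-1 H^H r, as in (3);
    sigma2 = 0 reduces this to the ZF detector."""
    M_tx = H.shape[1]
    G = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(M_tx), H.conj().T)
    return G @ r

# toy 2x2 example with a random Rayleigh channel
rng = np.random.default_rng(1)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
s = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)
r = H @ s + 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))
print(mmse_detect(H, r, sigma2=0.005))
```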
In the MLSO criterion we use, as the estimate of each symbol transmitted by antenna m, the conditional mean ŝ_{k,l}^m = E[s | R_{k,l}] = Σ_i s_i P(s_i | R_{k,l}), (4) where s_i corresponds to a constellation symbol from the modulation alphabet Λ, E[·] is the expected value, P(·) represents a probability, and p(·) a probability density function (PDF). Considering equiprobable symbols, P(s_i) = 1/M, where M is the constellation size. The PDF values required in (4) can be computed from the Gaussian likelihood of the received vector, where S_{k,l}^{interf} is an (M_tx − 1)×1 vector representing a possible combination of symbols transmitted simultaneously by all antennas except antenna m, and s is an M_tx×1 vector composed of the candidate symbol together with this interference combination. An IC can also be used inside the Spatial Demultiplexer block, but it is usually only recommendable after the first receiver iteration [11]. In this case, in iteration q, for each transmit antenna m and receive antenna n, the IC subtracts from the received signal the interference caused by all the other antennas.
B. Channel Estimation
To obtain the frequency channel response estimates for each transmitting/receiving antenna pair, the receiver applies the following steps in each iteration. (1) The channel estimate between transmit antenna m and receive antenna n for each pilot symbol position is simply computed as the received sample divided by the known pilot, Ĥ_{k,l}^{n,m} = R_{k,l}^n / S_{k,l}^m, where S_{k,l}^m corresponds to a pilot symbol transmitted in the kth subcarrier of the lth OFDM block using antenna m. Obviously, not all indexes k and l will correspond to a pilot symbol, since the pilots are spaced ΔN_F subcarriers and ΔN_T OFDM blocks apart. (2) Channel estimates for the same subcarrier k, transmit antenna m, and receive antenna n, but in time-domain positions (index l) that do not carry a pilot symbol, can be obtained through interpolation using a finite impulse response (FIR) filter with length W, as a weighted sum of the W nearest pilot-position estimates, where t is the OFDM block index relative to the last block carrying a pilot (the block with index l) and h_j^t are the interpolation coefficients of the estimation filter, which depend on the channel estimation algorithm employed. Several algorithms have been proposed in the literature, like the optimal Wiener filter interpolator [14] or the low-pass sinc interpolator [15].
(3) After the first iteration the data estimates can also be used as pilots for channel estimation refinement.
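A sketch of this three-step procedure for a single antenna pair is given below; linear interpolation stands in for the FIR/sinc filter, and the variable names are illustrative:

```python
import numpy as np

def ls_channel_estimate(R, P, pilot_blocks):
    """R, P: arrays of shape (n_subcarriers, n_blocks); P holds the known
    pilot symbols at the pilot positions.  Step (1): least-squares
    estimates H = R / P at the pilot blocks; step (2): time-axis
    interpolation for the blocks that carry no pilot."""
    H_p = R[:, pilot_blocks] / P[:, pilot_blocks]            # step (1)
    l = np.arange(R.shape[1])
    H = np.empty(R.shape, dtype=complex)
    for k in range(R.shape[0]):                              # step (2)
        H[k] = (np.interp(l, pilot_blocks, H_p[k].real)
                + 1j * np.interp(l, pilot_blocks, H_p[k].imag))
    return H
# step (3) would repeat (1)-(2) with detected data symbols used as extra pilots
```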
IV. NUMERICAL RESULTS
To study the behaviour of the proposed MIMO-OFDM scheme and the respective iterative receiver, several simulations were performed for a 16-QAM (k1 = 0.4) hierarchical constellation. Two classes of bits with different error protection were used.
Each individual information stream was encoded with a block size chosen so that the final encoded and modulated stream fitted a sub-frame composed of 7 OFDM blocks (corresponding to a 0.5 ms duration). All the parameters used for these simulations were based on the UTRA LTE 3GPP documents [16] and [17], for a 10 MHz bandwidth. Table 1 shows the respective parameters. The channel impulse response is based on the Vehicular A environment [19], with Rayleigh fading assumed for the different paths. A velocity of 30 km/h was employed unless otherwise stated. The channel encoders were rate-1/2 turbo codes based on two identical recursive convolutional codes characterized by G(D) = [1, (1 + D^2 + D^3)/(1 + D + D^3)] [18]. A random interleaver was used within the turbo encoders. Most of the BER (Bit Error Rate) results presented next will be shown as a function of E_S/N_0, where E_S is the average symbol energy and N_0 is the single-sided noise power spectral density.
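One constituent encoder of this turbo code can be written as a small shift-register routine; this sketch outputs the systematic and parity streams and omits trellis termination for brevity:

```python
def rsc_encode(bits):
    """Recursive systematic convolutional encoder for
    G(D) = [1, (1 + D^2 + D^3) / (1 + D + D^3)]."""
    s = [0, 0, 0]                    # shift-register state [D, D^2, D^3]
    sys_out, par_out = [], []
    for b in bits:
        fb = b ^ s[0] ^ s[2]         # feedback polynomial 1 + D + D^3
        p = fb ^ s[1] ^ s[2]         # feedforward polynomial 1 + D^2 + D^3
        sys_out.append(b)
        par_out.append(p)
        s = [fb, s[0], s[1]]         # shift the register
    return sys_out, par_out

print(rsc_encode([1, 0, 1, 1, 0]))
```

The rate-1/2 code described above combines two such encoders through the random interleaver; that wiring (and any puncturing needed to reach rate 1/2) is not shown here.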
For channel estimation purposes, pilot symbols were distributed using a spacing of ΔN_F = 6 in the frequency domain and ΔN_T = 4 or 7 in the time domain (the two possible configurations proposed in [16]), and a sinc filter interpolation with length W = 2 was used at the receiver. In the graph legends, MPB designates the most protected bits, IPB the intermediate protected bits, and LPB the least protected bits.
Figure 5 compares the performance of the different receiver methods of Table 2 for a MIMO 2x2 transmission employing a 16-QAM (k1 = 0.4) hierarchical constellation. It is visible that, although the receiver with the MMSE equalizer alone performs worse than with the MLSO equalizer, the performance can be substantially improved, achieving lower BLERs than with the MLSO, when an IC is also applied in the last receiver iterations. For the remainder of the paper the receiver configuration considered will be method 2 (MMSE+IC).
Figure 6 shows the behaviour of the receiver at different velocities. According to the results, the performance is almost insensitive to velocity up to 120 km/h, with only a small degradation visible in the performance of the LPB. For higher velocities the performance quickly degrades for the LPB but does not change significantly for the MPB, even at 300 km/h. Figure 7 compares the performance of a MIMO 2x2 transmission employing a 16-QAM (k1 = 0.4) hierarchical constellation with the two possible pilot spacings in the time domain. It is visible that both cases have similar performances (close to the perfect-estimation curves), which means that it is possible to adopt the larger pilot spacing and therefore increase the transmission efficiency if one or two transmit antennas are used (we verified that the conclusion also holds for four transmit antennas). It is important to remember, however, that reducing the number of pilot symbols will sacrifice the system robustness at higher velocities, as will be shown further ahead.
V. CONCLUSIONS
In this paper we have studied the use of QAM hierarchical constellations with the aim of supporting multicast and broadcast transmissions in a MIMO-OFDM system similar to the one being considered for UTRA LTE.
It was verified through simulations that the iterative receiver schemes studied are able to achieve good performances for all the bit streams, including those with lower error protection levels, even at very high velocities. Therefore the proposed transmitter/receiver scheme can provide unequal error protection, which is adequate for supporting MBMS transmissions in UTRA LTE. It was also observed that if a maximum of four transmit antennas are used with 16-QAM hierarchical modulations, the option with only half of the pilot symbols proposed for UTRA LTE (ΔN_T = 7) is adequate, unless higher robustness is desired at high velocities for all information streams.
To avoid interference between pilots of different transmitting antennas, FDM (Frequency Division Multiplexing) is employed for the pilots, which means that pilot symbols cannot be transmitted over the same subcarrier in different antennas. Data symbols are not transmitted on subcarriers reserved for pilots in any antenna; this determines the minimum allowed pilot spacing in the frequency domain. S_{k,l}^m is the symbol transmitted by the kth subcarrier of the lth OFDM block using antenna m. The transmitted OFDM signals are then obtained by applying the IDFT and the pulse-shaping filter h_T(t).
Figure 3: Frame structure for an OFDM transmission (P - pilot symbol, D - data symbol, T_s - symbol duration).
H_{k,l}^{n,m} denotes the overall channel frequency response between transmit antenna m and receiving antenna n for the kth frequency of the lth time block, and N_{k,l}^m denotes the corresponding channel noise; ŝ_{k,l}^m denotes the symbol estimates of the previous iteration for transmit antenna m, subcarrier k, and OFDM block l.
"Computer Science",
"Engineering"
] |
Bernstein-type Theorems in Hypersurfaces with Constant Mean Curvature
By using the nodal domains of some natural function arising in the study of hypersurfaces with constant mean curvature we obtain some Bernstein-type theorems.
INTRODUCTION
The Bernstein theorem on minimal surfaces x : M^2 → R^3 in the Euclidean space R^3 states that if x(M^2) is a graph over a plane P of R^3 which is defined at all points of P, then M^2 is itself a plane. This beautiful result has been the basis of a large number of investigations on minimal surfaces. Among its generalizations is a theorem proved independently by (do Carmo & Peng 1979) and (Fischer-Colbrie & Schoen 1980), which states that if M^2 is complete and stable then it is a plane.
A generalization of this theorem to higher dimensions was obtained by (do Carmo & Peng 1980) as follows: Theorem A. Let x : M^n → R^{n+1} be a minimal hypersurface. Assume that M^n is stable and complete, and that the integrals ∫_{B(R)} |A|^2 dM satisfy a suitable growth condition in R. Then M^n is a hyperplane in R^{n+1}.
Here A is the second fundamental form and B(R) is a geodesic ball of radius R centered at some fixed point in M.
Theorem A has recently been extended to hypersurfaces with constant mean curvature. A crucial point is to replace A by the traceless second fundamental form φ = −A + H I; here H is the mean curvature of x : M^n → R^{n+1}. The precise statement is as follows: Theorem B (Alencar & do Carmo 1994a). Let x : M → R^{n+1}, n ≤ 5, be a complete noncompact hypersurface with constant mean curvature H. Assume that M is strongly stable (see the definition in Section 1) and that the growth condition (0.1) on P(R) holds. Then M is a hyperplane in R^{n+1}.
In the present paper, we extend Theorem B in two directions. First we relax the growth condition on P(R) = ∫_{B(R)} |φ|^2 dM and extend Theorem B to this weaker condition. More precisely, we prove Theorem 1. Let M^n be a strongly stable complete noncompact hypersurface of R^{n+1} (n ≤ 5) with constant mean curvature H. If P(r) ≤ C e^{αHr} for some positive constants C and α, where α depends on n as given in the proof, then M is a hyperplane.
Next we improve the dimension condition from n ≤ 5 to n ≤ 6 and prove Theorem 2. Let M be a strongly stable complete noncompact hypersurface of R^{n+1} (n ≤ 6) with constant mean curvature H, and assume that condition (0.1) holds. Then M is a hyperplane. Theorem 1 is the main theorem of this paper and goes a long way towards getting rid of condition (0.1) in Theorem B. For its proof we need an auxiliary proposition that might be interesting by itself; it states that the function |φ| on a hypersurface M^n with constant mean curvature in R^{n+1} has no bounded nodal domain.
NOTATIONS AND PRELIMINARIES
Let M^n be a complete noncompact hypersurface in R^{n+1}. Fix p ∈ M and choose a local unit normal field N. Define a linear map A on tangent vectors by ⟨AX, Y⟩ = ⟨∇_X Y, N⟩, where X, Y are tangent vector fields and ∇ is the standard connection on R^{n+1}. The map A can be diagonalized, i.e., there exists a tangent basis {e_1, e_2, ..., e_n} such that Ae_i = k_i e_i, i = 1, 2, ..., n. We then define the mean curvature H := (1/n) Σ_{i=1}^n k_i and the square of the second fundamental form |A|^2 := Σ_{i=1}^n k_i^2. It is well known that the above objects are independent of the choices made.
If M is minimal (H = 0), we say M is stable if for all piecewise smooth functions f : M → R with compact support we have ∫_M (|∇f|^2 − |A|^2 f^2) dM ≥ 0, (1.1) where ∇f is the gradient of f in the induced metric. The notion of stability has been extended to hypersurfaces with constant mean curvature as follows: M is said to be strongly stable if (1.1) holds for all piecewise smooth functions f : M → R with compact support. M is said to be weakly stable if (1.1) holds for all piecewise smooth functions f : M → R with compact support and ∫_M f = 0.
Let x : M^n → M̄^{n+1} be an isometric immersion of a complete, noncompact Riemannian n-dimensional manifold M^n into an oriented, complete, Riemannian (n+1)-dimensional manifold, N a smooth unit normal field along M, and Ric(N) the value of the Ricci curvature of M̄^{n+1} in the direction N. Here Ric(N) = Σ_{i=1}^n K(e_i ∧ N) (this differs from the normalized one). The Morse index ind M of M is defined as follows. Let L be the second-order differential operator on M given by L = Δ + |A|^2 + Ric(N). (1.2) Associated to L is a quadratic form defined on the vector space of functions f on M that have support in a compact domain K ⊂ M.
For each such K, define the index ind_L K of L in K as the maximal dimension of a subspace on which the associated quadratic form is negative definite, and set ind(M) := sup_K ind_L K, where the supremum is taken over all compact domains K ⊂ M. It is well known that ind(M) ≤ 1 if M is weakly stable (see, for example, (Fischer-Colbrie 1985)).
In what follows we always assume that M is a hypersurface in R^{n+1} with constant mean curvature H. To study hypersurfaces with constant mean curvature, it is convenient to modify slightly the second fundamental form and to introduce a new linear map φ := −A + H I. The map φ can also be diagonalized, with eigenvalues µ_i = H − k_i. It is easily checked that tr φ = 0 and |φ|^2 = |A|^2 − nH^2. Thus |φ|^2 measures how far M is from being totally umbilic. For the rest of this section we follow (Alencar & do Carmo 1994a). Choosing an orthonormal principal frame {e_i}, one works with the components φ_{ijl} of the covariant derivative of the tensor φ and with the sectional curvature R_{ijij} of the plane {e_i, e_j}; by the Gauss formula, R_{ijij} can be expressed in terms of the principal curvatures. Since Σ_i µ_i = 0, a Simons-type estimate for |φ| follows. By using a lemma of Okumura (see (Alencar & do Carmo 1994b) for a proof), we finally arrive at inequality (1.5).
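The umbilicity interpretation follows from a two-line computation that may be worth spelling out (using the eigenvalues µ_i = H − k_i introduced above):

```latex
\[
\operatorname{tr}\varphi=\sum_{i=1}^{n}(H-k_i)=nH-nH=0,
\qquad
|\varphi|^{2}=\sum_{i=1}^{n}(H-k_i)^{2}
             =nH^{2}-2H\sum_{i}k_i+\sum_{i}k_i^{2}
             =|A|^{2}-nH^{2}.
\]
% Hence |phi| = 0 exactly when every principal curvature k_i equals H,
% i.e. when M is totally umbilic.
```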
A RESULT ON NODAL DOMAINS
In this section we prove a result on the nodal domains of |φ| which will be needed in the proofs of our main theorems. We first need to recall the definition of nodal domains.
Definition. An open domain D is called a nodal domain of the function f if f(x) ≠ 0 for x ∈ int D and f vanishes on the boundary ∂D. We denote by N(f) the number of disjoint bounded nodal domains of f. Now we have the following lemma, which follows directly from Proposition 2.2 below. We are indebted to the referee, who provided its proof and corrected a mistake in our original version.
Proposition 2.2. Let (M, g) be a Riemannian manifold and u ≥ 0 a continuous function satisfying an inequality of Simons' type (2.2) in the distribution sense, where a > 0 is a constant and ϕ is a continuous function on R.
Then u has no relatively compact nodal domain.
Proof. Suppose that u admits a relatively compact nodal domain D. Write q := ϕ(u) and v := log u on D; then (2.2) can be rewritten in terms of v. For any Lipschitz function f with support in D and vanishing at ∂D we obtain a corresponding integral inequality. Let f = wu, for some function w to be determined. For all b such that U/2 ≤ b ≤ U, where U := sup_D u, we define a suitable cut-off function w_b. When b goes to U, the first term on the right-hand side tends to 0 (because |∇u|^2 is integrable), while the second term is fixed. It follows that ∫_D (|∇f|^2 − q f^2) < 0 for all functions f = w_b u when b is close to U. These functions w_b span an infinite-dimensional vector space, which leads to a contradiction with the fact that D is relatively compact and q is continuous.
BERNSTEIN-TYPE THEOREMS
Before proving our main theorem, we need an auxiliary proposition. Set P(r) := ∫_{B(r)} |φ|^2 dM. Proposition 3.1. Let M be a complete noncompact hypersurface of R^{n+1} (n ≤ 5) with constant mean curvature H (H ≠ 0) and finite index. Assume that P(r) ≤ C e^{αHr} for some positive constants C and α, where α is a constant that can be expressed explicitly in terms of n. Then ∫_M |φ|^2 < +∞.
Our Theorem 1 is a corollary of the above proposition: it is a combination of the proposition and theorems in (Alencar & do Carmo 1994a) and (do Carmo & Peng 1980). Before proving Proposition 3.1 we give the proof of Theorem 1.
Proof of Theorem 1. To prove the conclusion of Theorem 1 we only need to show that H = 0; the result then follows from Theorem A. Suppose, on the contrary, that H ≠ 0. By Proposition 3.1 we know that ∫_M |φ|^2 < +∞, which is impossible by Theorem B. Thus the proof is complete.
We now prove the proposition. Proof of Proposition 3.1. Introduce f|φ|^{q+1} in the stability inequality (1.1). It has been shown in (Alencar & do Carmo 1994a) that a corresponding inequality, with constants depending on q and ε, holds for all ε > 0. If M has finite index then it is stable outside some ball B(R). In (3.1) we choose q = 0; then A = 1, and it can be checked that when n ≤ 5 we can find a sufficiently small ε > 0 such that 4C − B^2 > 0. So there exists a constant β, which can be expressed in terms of n, such that (3.3) holds for any piecewise smooth function f with compact support in M\B(R). We claim that we can choose R large enough that P(r) > 0 for all r > R. Otherwise we could find two positive constants r_1 < r_2 such that |φ|(x) = 0 when x ∈ ∂B(r_i); then B(r_2)\B(r_1) would contain a bounded nodal domain, contradicting Lemma 2.1.
Assume, for the sake of contradiction, that P(+∞) = +∞. Then from our oscillation theorem in (do Carmo & Zhou 1999, Theorem 2.1) we have that for any λ > α^2 H^2/4 we can find x(t), not identically zero, which is an oscillatory solution of the associated ordinary differential equation. Choose f(x) = x(r(x)), where r(x) is the distance function to some fixed point in M. We can find T_1 and T_2 such that T_2 > T_1 > R and x(T_1) = x(T_2) = 0, with x(t) > 0 for all t ∈ (T_1, T_2). Now choose λ = (α^2/4 + δ)H^2, where δ > 0 is a constant such that β^2 − δ > 0, and set α < 2√(β^2 − δ). It follows that the inequality (3.3) is violated. This is a contradiction, which proves our conclusion.
SOME FURTHER RESULTS
In this section we want to give some further related results. Using the eigenvalue estimate in (do Carmo & Zhou 1999) we can get an index estimate for hypersurfaces with nonzero constant mean curvature.
Define α(M) := lim sup_{r→+∞} (log V(r))/r, where V(r) is the volume of the geodesic ball B(r). It is easy to see that α(M) = 0 if M has polynomial volume growth. In order to prove Theorem 4.1 we need the eigenvalue estimate proved by the authors in (do Carmo & Zhou 1999), which is restated as follows.
Theorem. Let M be a complete noncompact Riemannian manifold with infinite volume and Ω an arbitrary compact subset of M. Then λ_1(M\Ω) ≤ α(M)^2/4. Proof of Theorem 4.1. It suffices to prove that for any natural number N we can find N disjoint compact domains on which the index form is negative. Note that from (Frensel, 1996) the volume of M is infinite, so from the Theorem we have λ_1(M\Ω) ≤ α(M)^2/4 for any compact set Ω in M. So we can find a compact domain D_1 such that λ_1(D_1) ≤ α^2/4 < nH^2. We also have λ_1(M\D_1) ≤ α^2/4 < nH^2, so we can find again a compact domain D_2 ⊂ M\D_1 such that λ_1(D_2) ≤ α^2/4 < nH^2, and λ_1(M\(D_1 ∪ D_2)) ≤ α^2/4 < nH^2. Repeating this procedure, we can find disjoint compact domains D_1, D_2, ..., D_N such that λ_1(D_i) < nH^2.
Let ϕ_i be the positive first eigenfunction on D_i, i.e., Δϕ_i + λ_1(D_i)ϕ_i = 0 in D_i and ϕ_i = 0 on ∂D_i. We now define f_i(x) := ϕ_i(x) for x ∈ D_i and f_i(x) ≡ 0 for x ∈ M\D_i. Then I(f_i, f_i) < 0 for i = 1, 2, ..., N. This shows that ind(M) ≥ N for any N, so ind(M) = +∞.
The following is an easy consequence of Theorem 4.1.
Corollary 4.2. If M is a complete noncompact hypersurface with nonzero constant mean curvature H and polynomial volume growth, then ind(M) = +∞. In particular, ind(M) = +∞ when M = S^k × R^{n−k} with the standard metric; here S^k is the k-dimensional sphere in R^{k+1}.
"Mathematics"
] |
Nuclear properties of loop extensions
The objective of this paper is to give a systematic investigation of the extension theory of loops. A loop extension is (left, right, or middle) nuclear if the kernel of the extension consists of elements associating (from the left, right, or middle) with all elements of the loop. It turns out that the natural non-associative generalizations of Schreier's theory of group extensions can be characterized by different types of nuclear properties. Our loop constructions are illustrated by rich families of examples in important loop classes.
Introduction
A loop L is an extension of the loop N by the loop K if N is a normal subloop of L and K is isomorphic to the factor loop L/N. Extension theory deals with the classification of all possible extensions of N by K and studies their properties. The related problems in group theory are completely solved by the Schreier theory of group extensions, cf. [9], [10], [4], Chapter XII, §48-49, pp. 121-131. For loops, however, A. A. Albert and R. H. Bruck proved in 1944 that the construction of loop extensions of N by K has many more degrees of freedom; namely, the multiplication function between different cosets of N can be prescribed arbitrarily. An interesting class of loop extensions of groups by loops is introduced in [5], where the multiplication of the extended loop is determined by an analogous formula as in the Schreier theory of groups. This paper contains characterizations and constructions of examples for interesting subclasses of such loops; these loops are called Schreier loops. In the recent papers [6] and [7] a non-associative extension theory of Schreier type is investigated in a broader context; namely, characterizations are given of right nuclear automorphism-free extensions of groups by quasigroups with right identity element. As a consequence of these results, it turns out that the extension of a normal subgroup by the factor loop is isomorphic to an automorphism-free Schreier loop if and only if the normal subgroup is right and middle nuclear and there exists a left transversal to the normal subgroup (through the identity element of the loop) which commutes with this subgroup.
The aims of this paper are to find algebraic characterizations of Schreier loops and to examine the limits of the non-associative generalization of the Schreier theory of extensions. In §2 we give the necessary definitions and formulate the basic constructions. §3 is devoted to the discussion of the interrelation between nuclear properties of normal subgroups in a loop and the corresponding Schreier extensions. In particular, we show that for any Schreier extension this normal subgroup is middle and right nuclear, but in the general case it is not left nuclear. In §4 we introduce the notion of a Schreier decomposition and show that a Schreier decomposition of a loop L is uniquely determined by a middle and right nuclear normal subgroup G, an isomorphism of a loop K to the factor loop L/G, and a left transversal to G through the identity element. §5 is devoted to the study of automorphisms of middle and right nuclear normal subgroups of a loop induced by middle inner mappings by loop elements. All of these maps are inner automorphisms if and only if there exists a left transversal through the identity element to the subgroup commuting with this subgroup. In §6 we investigate different properties of Schreier decompositions of a loop. We give characterizations of loops having automorphism-free, respectively factor-free, Schreier decompositions. §7 is devoted to the study of Schreier loops which are Schreier decompositions of the same loop with respect to the same normal subgroup.
Preliminaries
A quasigroup L is a set with a binary operation (x, y) → x·y such that for each x ∈ L the left and the right translations λ_x : y → λ_x y = xy, respectively ρ_x : y → ρ_x y = yx, are bijective maps L → L. We define the left and right division operations on L by (x, y) → x\y = λ_x^{-1} y, respectively (x, y) → x/y = ρ_y^{-1} x, for all x, y ∈ L. A quasigroup L is a loop if it has an identity element e ∈ L. The right inner mappings of a loop L are the maps ρ_{yx}^{-1} ρ_x ρ_y : L → L, x, y ∈ L. We will reduce the use of parentheses by the following convention: juxtaposition denotes multiplication, the operations \ and / are less binding than juxtaposition, and · is less binding than \ and /. For instance, the expression xy/u · v\w is a short form of ((x·y)/u)·(v\w). The subgroups N_l(L) = {u ∈ L; ux·y = u·xy, x, y ∈ L}, N_m(L) = {u ∈ L; xu·y = x·uy, x, y ∈ L} and N_r(L) = {u ∈ L; xy·u = x·yu, x, y ∈ L} are the left, middle and right nuclei of L. A loop L satisfies the left, respectively the right, inverse property if there exists a bijection x → x^{-1} : L → L such that x^{-1}·xy = y, respectively yx·x^{-1} = y, holds for all x, y ∈ L. The left alternative, respectively right alternative, property of L is defined by the identity x·xy = x^2 y, respectively yx·x = yx^2; L is a right Bol loop if z(xy·x) = (zx·y)x for all x, y, z ∈ L. Any left (respectively right) Bol loop has the left (respectively right) alternative and inverse properties. Let K and N be loops with identity elements ǫ ∈ K and e ∈ N, and let ▽_{α,β}, α, β ∈ K, be a family of quasigroup multiplications on N such that the equations e ▽_{ǫ,α} x = x and x ▽_{α,ǫ} e = x are fulfilled for any α ∈ K and x ∈ N. The multiplication (α, a)·(β, b) = (αβ, a ▽_{α,β} b) of the pairs (α, a), (β, b) ∈ K × N determines a loop L_▽ on K × N with identity (ǫ, e). Clearly, L_▽ is an extension of the normal subloop N̄ = {(ǫ, a); a ∈ N} by the loop K, where N̄ is isomorphic to N.
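These axioms are easy to verify mechanically on a finite multiplication table; the following small sketch uses an ad hoc table encoding (my own convention, not from the paper):

```python
def is_loop(table):
    """table[x][y] encodes x*y on the elements 0..n-1.  L is a quasigroup
    iff every row and column is a permutation (a Latin square), and a loop
    iff in addition some element e satisfies e*x = x*e = x for all x."""
    n = len(table)
    full = set(range(n))
    latin = (all(set(row) == full for row in table) and
             all({table[x][y] for x in range(n)} == full for y in range(n)))
    has_identity = any(all(table[e][x] == x == table[x][e] for x in range(n))
                       for e in range(n))
    return latin and has_identity

z3 = [[(x + y) % 3 for y in range(3)] for x in range(3)]  # cyclic group Z_3
print(is_loop(z3))   # True: every group is a loop
```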
In the following we will discuss nuclear properties of normal subgroups of loops and the corresponding extensions of groups by loops. The following lemma allows us to consider a family of extensions such that the normal subgroup Ḡ = {(ǫ, a); a ∈ G} is right nuclear but not middle or left nuclear.
Lemma 1. Let G be a group with identity e ∈ G, K a loop with identity ǫ ∈ K, and let ψ_σ : G → G be bijective maps depending on σ ∈ K, satisfying ψ_σ(e) = e for any σ ∈ K and ψ_ǫ = Id. The multiplication defined on K × G by means of the maps ψ_σ determines a loop for which: (i) the subgroup Ḡ is right nuclear.
(ii) Ḡ is middle nuclear if and only if ψ_σ : G → G is an automorphism of G for any σ ∈ K.
(iii) Ḡ is left nuclear if and only if the map
Proof. The assertions (i) and (ii) can be obtained by direct computation.
Putting a = e the assertion follows.
Schreier extension
In the following we consider a special case of Bruck's extension process, assuming that the extended multiplication has an analogous expression as in the Schreier theory of group extensions (cf. [5]). Let G be a group with identity e ∈ G, K a loop with identity ǫ ∈ K, and let Aut(G) denote the automorphism group of G. If σ → Θ_σ is a mapping K → Aut(G) with Θ_ǫ = Id and f : K × K → G is a map with f(ǫ, σ) = f(σ, ǫ) = e, then the corresponding multiplication on K × G, together with the divisions, determines a loop L(Θ, f). The maps (ǫ, t) → t : Ḡ → G and τ → (τ, e)Ḡ : K → L(Θ, f)/Ḡ are isomorphisms. Clearly, L(Θ, f) is an extension of the group Ḡ by the loop K.
L(Θ, f) is a group if and only if K is a group and the classical Schreier identities relating Θ and f are satisfied (cf. [4], §48).
Lemma 3. For any Schreier loop L(Θ, f): (i) Ḡ is middle and right nuclear. Proof. The assertions follow from Proposition 3.2 and Propositions 3.7, 3.8 and 3.10 in [5].
Schreier decomposition
Lemma 5. Let L be a loop extension of the group G by the loop K, and let F : L(Θ, f) → L be an isomorphism. Proof. Since the map σ → (σ, e)G : K → L/G is an isomorphism, the image of the coset (σ, e)G is the coset {F((σ, e)(ǫ, s)), s ∈ G} = {F(σ, e)s, s ∈ G}, and the assertion follows.
We notice that the isomorphism F : L(Θ, f) → L satisfies F(ǫ, t) = t for any t ∈ G if and only if F is an extension of the isomorphism I : Ḡ → G defined by I(ǫ, t) = t.
Definition 6. Let K and L be loops, G a normal subgroup of L, and L(Θ, f) a Schreier loop defined on K × G. A Schreier decomposition of L with respect to its normal subgroup G is an isomorphism F : L(Θ, f) → L. The underlying isomorphism of the Schreier decomposition F is the map σ → F(σ, e)G : K → L/G.
The following lemma shows that, in the investigation of Schreier decompositions of L with respect to its normal subgroup G, the middle and right nuclear property of G is a reasonable assumption. Left transversals to G in L. Lemma 9. If G is a middle and right nuclear normal subgroup of the loop L, then the maps T_x|_G : G → G induced by the middle inner mappings are automorphisms of G. Proof. If π : L → L/G is the canonical homomorphism, then π(T_x(t)) = π(x)π(t)/π(x) = ǫ for x ∈ L, t ∈ G. The resulting loop is called the Schreier loop corresponding to the data pair (κ, Σ).
Corollary 13 A loop L has a Schreier decomposition with respect to a normal subgroup G if and only if G is middle and right nuclear.
Proposition 14. If a loop L satisfies one of the following conditions: the left inverse property, the left alternative property, or the flexible property, then any middle and right nuclear normal subgroup of L is nuclear.
Proof. It follows from Theorem 12 that for a middle and right nuclear normal subgroup G of L there is a Schreier decomposition F : L(Θ, f) → L with respect to G. According to Propositions 3.7, 3.8, respectively 3.10 in [5], a Schreier loop L(Θ, f) having the left inverse, left alternative, respectively flexible, property satisfies condition (2). In this case we obtain from Proposition 3.2(i) in [5] that the normal subgroup Ḡ = {(ǫ, t); t ∈ G} of L(Θ, f) is nuclear, and hence G is also a nuclear subgroup of L.
Now we give examples of Schreier loops having the right Bol property such that the normal subgroup Ḡ is middle and right nuclear, but not nuclear. These properties can be verified by easy computation.
Example 15. Let K be a right Bol loop, G a group, and H the group generated by the right inner mappings ρ_{τσ}^{-1} ρ_σ ρ_τ : K → K, σ, τ ∈ K. Let χ : H → G be a homomorphism such that the image χ(H) is not contained in the centre of G. Define the map f : K × K → G by means of χ and the right inner mappings, take Θ = Id, and consider the corresponding Schreier loop L(Id, f).
Example 16. Let K and G be groups. Assume that the group K is not abelian and denote by K′ the commutator subgroup of K. Let φ : K′ → G be a homomorphism such that the image φ(K′) is not contained in the centre of G. Define the map f : K × K → G by means of φ and the commutators, take Θ = Id, and consider the corresponding Schreier loop L(Id, f).
Example 17. Let K and G be groups with identity ǫ ∈ K and e ∈ G, respectively. Assume that the group K is not abelian. Let φ : K → G be a homomorphism such that the image φ(K) is not contained in the centre of G. Define Θ : K → Aut(G) by Θ_σ = ι_{φ(σ)}, σ ∈ K, where ι_s(t) = sts^{-1} for s, t ∈ G, take f ≡ e, and consider the corresponding Schreier loop L(ι_φ, e).
Automorphisms of G induced by elements of L
According to Lemma 9 the maps T_x|_G are automorphisms of a middle and right nuclear normal subgroup G of a loop L, where x is an arbitrary element of L. If r ∈ G then T_r|_G is the inner automorphism ι_r(t) = rtr^{-1}, r, t ∈ G.
Lemma 18. If G is a middle and right nuclear normal subgroup of a loop L, then the automorphisms T_{xr}|_G and T_{rx}|_G with x ∈ L and r ∈ G can be decomposed as T_{xr}|_G = T_x|_G ∘ ι_r and T_{rx}|_G = ι_r ∘ T_x|_G. Proof. Since s and r belong to N_r(L), we have T_{xr}(s)·xr = xr·s = x·ι_r(s)r = xι_r(s)·r = T_x(ι_r(s))x·r = T_x(ι_r(s))(xr), hence the first assertion is true. Similarly, the second assertion follows, since s ∈ N_r(L) and T_x(s), r ∈ N_m(L).
Corollary 19. If G is a middle and right nuclear normal subgroup in L, then the maps T_x|_G, x ∈ L, are inner automorphisms of G if and only if there exists a left transversal to G in L which commutes elementwise with G. Proof. Assume that for any x ∈ L the map T_x|_G is an inner automorphism. Let Σ be a left transversal of L/G and g : Σ → G a map satisfying g(e) = e and T_x|_G = ι_{g(x)} for any x ∈ Σ. Clearly, the set Σ* = {x·g(x)^{-1}; x ∈ Σ} is a left transversal of L/G. According to Lemma 18, T_{x·g(x)^{-1}}|_G = T_x|_G ∘ ι_{g(x)^{-1}} = Id_G, and hence Σ* ⊂ C_L(G). Conversely, let Σ be a left transversal of L/G such that T_x|_G = Id_G for all x ∈ Σ. Any element of L is a product x·r with x ∈ Σ, r ∈ G, and hence Lemma 18 yields that T_{x·r}|_G = T_x|_G ∘ ι_r = ι_r, i.e. T_{x·r}|_G is an inner automorphism of G.
Lemma 21 For a middle and right nuclear normal subgroup G in L the mapping T| G : L → Aut(G) is a homomorphism if and only if G is nuclear.
Proof. For any s ∈ G, x, y ∈ L we have s ∈ N r (L), T y (s) ∈ N m (L) and hence T xy (s) · xy = x · T y (s)y = xT y (s) · y = T x (T y (s))x · y.
It follows that T|_G : L → Aut(G) is a homomorphism if and only if for any x, y ∈ L, s ∈ G one has T_x(T_y(s))x·y = T_x(T_y(s))·xy. Since T_x|_G, T_y|_G : G → G are bijective maps, the map T|_G : L → Aut(G) is a homomorphism if and only if G is left nuclear.
From Proposition 14 we obtain the following. Proof. Using the second formula of (4), we obtain that the Schreier loop defined by (4) is factor-free if and only if the map l : K → L satisfies l_{στ}\l_σ l_τ = e for any σ, τ ∈ K, and hence the map l : K → L is a loop homomorphism. It follows that L has a factor-free Schreier decomposition if and only if there exists a left transversal Σ of L/G which is a subloop of L.
The following assertion shows how the Schreier decomposition of a loop L with respect to a normal subgroup G is altered if we change the underlying isomorphism.
Proposition 26. Let L(Θ, f) be a Schreier decomposition of L with respect to G with underlying isomorphism κ : K → L/G, and let µ be an automorphism of K. Proof. We denote the multiplication of L(Θ̄, f̄) by •̄ and define the corresponding map. Theorem 27. The maps Θ̄ : K → Aut(G) and f̄ : K × K → G determined by the left transversal Σ(l) = {l_σ n(σ) ∈ κ(σ), σ ∈ K} can be expressed explicitly.
"Mathematics"
] |
FORMULATION AND PHYSICAL STABILITY TESTING OF CREAM SCRUB PREPARATIONS FROM ETHANOL EXTRACT OF Nelumbo nucifera GAERTN FLOWER AND LEAF
Nelumbo nucifera is an aquatic plant that thrives in muddy and waterlogged soil, particularly in swampy environments. It is used in traditional medicine for various purposes, including the management of diarrhoea, tissue inflammation, and homeostasis. The flowers and leaves of Nelumbo nucifera contain many secondary metabolites, including flavonoids, alkaloids, tannins, and antioxidants. The objective of this study is to determine whether a cream scrub can be formulated from the ethanol extract of Nelumbo nucifera flowers and leaves, and whether concentrations of 3%, 5%, and 7% of this extract can effectively moisturize the skin. The research method is experimental, involving the preparation of simplicia, the production of extracts, the formulation of body scrub preparations from the ethanol extracts of Nelumbo nucifera flowers and leaves, and the subsequent evaluation of these preparations. This study found that the moisture content with the Nelumbo nucifera flower ethanol extract cream increased by 41.2% for F1, 46.5% for F2, and 52.9% for F3. The corresponding values for the Nelumbo nucifera leaf extract cream were 38.8% for F1, 44.4% for F2, and 47.7% for F3. The ethanol extract derived from the flowers and leaves of Nelumbo nucifera can therefore be formulated as a cream scrub, and at concentrations of 3%, 5%, and 7% it can effectively moisturize the skin.
Preparing Nelumbo nucifera Leaf and Flower Extract
The simplicia powder of Nelumbo nucifera leaves and flowers was macerated at a ratio of 1:10, i.e., 600 grams of material to 6000 ml of solvent. The extraction process was as follows: put 600 grams of simplicia powder in a jar.
The powder was soaked in 4500 ml of 70% ethanol solvent. The container was covered with aluminium foil and left for 5 days, stirring occasionally, then filtered using filter paper to yield a filtrate and a residue.
Remaceration with the remaining 1500 ml (25 parts) of 70% ethanol was then performed on the residue. After covering the container with aluminium foil, it was stirred every two hours for two days.
After 2 days, the sample was filtered into residue and filtrate. Filtrates 1 and 2 were mixed, and the 70% ethanol liquid extract was evaporated with a rotary evaporator until thick [14].
Scrub Making Method
Gather tools and materials.
Spreadability Test
The spreadability test checks that the preparation disperses evenly on the skin; the spreadability criterion for topical preparations is 5-7 cm. One gram of the preparation was placed in the middle of a round glass plate and covered with another; a 50 g load was added and left for one minute, and the spreading diameter was measured. The load was then increased to 100 g and, after one minute, the diameter was measured again; the same was repeated with 150 g. This was continued until a sufficient diameter was reached, in order to observe how the load affects the spreadability of the preparation [18].
Irritation Test
The irritation test was carried out by applying the cream scrub preparation behind the ears of 12 volunteers and then observing for 15 minutes for any symptoms. The observed response was whether or not skin irritation occurred [19].
Moisture Test
The effectiveness test was conducted on 15 volunteers divided into five groups: Group I, 3 volunteers for the blank formula; Group II, 3 volunteers for the 3% formula; and, by the same pattern, groups for the 5% formula, the 7% formula, and the positive control. The skin condition was checked before and after using the scrub [20].
Stability Test (Cycling Test)
The stability test was conducted using the cycling test method. The cream scrub samples were stored at 4 °C for 24 hours and then transferred to a 40 °C oven for 24 hours (one cycle). The test ran for six cycles, and physical changes, including organoleptic properties, homogeneity, and pH, were observed [21].
Hedonic Test
The hedonic test, also called a preference test, assesses a person's liking for a product; it was carried out on the aroma, physical appearance, texture, and comfort of the preparation in use.
Evaluation of Cream Scrub Preparations
Organoleptic Test
As shown in Tables 1 and 2, organoleptic tests of the cream scrub preparations from ethanol extracts of Nelumbo nucifera flowers and leaves were performed at three concentrations plus a blank to determine shape, colour, and fragrance.
pH Test
The pH test results for the ethanol extract cream scrubs from Nelumbo nucifera flowers and leaves, measured with a pH meter, are shown in Table 5 and Table 6. For the flower ethanol extract cream scrub, the blank, F1, F2, F3, and F4 (positive control) had pH values of 6.7, 6.5, 6.5, and 6.4, respectively. For the leaf ethanol extract cream scrub, the blank had a pH of 6.8, F1 6.7, F2 6.4, F3 6.3, and F4 (positive control) 6.4.
Spreadability Test
The spreadability test of the cream scrub of Nelumbo nucifera flower ethanol extract gave averages of 5.6 cm for F0, 5.5 cm for F1, 5.4 cm for F2, 5.3 cm for F3, and 5.1 cm for F4; for the leaf extract scrub the averages were 5.6 cm (F0), 5.6 cm (F1), 5.4 cm (F2), 5.3 cm (F3), and 5.1 cm (F4). All formulas thus fall within the 5-7 cm criterion for topical preparations.
Irritation Test
In the irritation test on volunteers, the preparations were observed for whether skin irritation occurred. The moisture measurement results show the percentage increase in skin moisture from week 1 to week 4 of use.
Skin is an organism's outer surface and separates it from the outside world. It protects tissue against chemical, physical, and mechanical harm and against pathogens [1]. Skin aging occurs with increasing age and is affected by many internal and external factors; sunlight and other external factors can damage the skin. Skin conditions can be treated, and skincare can be internal or external [2]: modern care uses tools or machines, while traditional care uses natural ingredients processed manually, such as fruit-based body scrubs. Cosmetics are products used on the skin, hair, nails, and external sexual organs, as well as the teeth and oral mucosa, to clean, perfume, change appearance, improve body odour, and maintain health. Body scrubs are cosmetics that cleanse the body [3,4]. Body scrubs made from flowers and other plants help keep skin healthy, smooth, and bright; scrubs can remove weather- and pollution-induced grime from the skin, leaving it healthy, clean, and beautiful [5]. There are two kinds of scrubs: traditional and modern. Traditional body scrubs were made with coarse ingredients like spices and flour, while modern body scrubs are made from scrub granules and a lotion generally based on milk. Scrubs come in powder, cream, and whipped forms; cream scrubs are commonly formed like a paste or thick dough and can be applied directly to damp skin or to skin that has been wetted first.
were dissolved in hot water (mass II). Mass I was added to a hot, dried mortar and, with constant crushing, mass II was slowly added to produce a homogeneous scrub mass. The flower or leaf extract of Nelumbo nucifera was added to the scrub at the prescribed concentration and crushed again; the result was used for the tests. For the pH test, 1 gram of the preparation was diluted in 100 ml of distilled water in a beaker; the pH meter reading was taken once the displayed value became constant, giving the preparation's pH. Each formula was tested in triplicate. The pH of a cream scrub must match the skin's pH of 4.5-6.5 [17]. For the moisture test, the volunteers' skin was first measured with a moisture checker, and the cream scrub preparation of Nelumbo nucifera flower or leaf ethanol extract was then applied to the marked area of each volunteer's hand.
… (cream), F2 (light brown), and F3 (brown and cream) smell like green tea. The Nelumbo nucifera leaf ethanol extract cream scrubs F1, F2, and F3 are light brown, brown, and dark brown, respectively, cream-formed, and smell like green tea. The differing extract concentrations explain the colours: F3, which contains 7% ethanol extract, is brown to dark brown because the higher the concentration, the more intense the colour. Research on cream scrubs with red guava leaf extract at 4%, 6%, and 8% likewise showed a stronger, more intense colour at 8% [22]. The homogeneity test on the cream scrubs made from ethanol extracts of Nelumbo nucifera flowers and leaves shows that the preparations are homogeneous, with no poorly mixed parts. This is supported by Susanna's research on rice husk activated charcoal at concentrations of 8%, 10%, and 12%, in which no unmixed parts appeared when tested on glass [23].
The pH of the preparations decreased below that of the blank as the concentration of the ethanol extracts of Nelumbo nucifera flower and leaf increased, while remaining safe for facial skin. The acidity of the extracts themselves may contribute to this, and CO2 entering the container during measurement reacts with water and lowers the pH. The F2 and F3 cream scrubs of both extracts have a pH similar to that of the skin. According to Sopianti et al. (2022), in research on cream scrubs using red seaweed extract at 5%, 10%, and 15% concentrations, the more extract added to the preparation, the more acidic the preparation and the lower the pH [24].
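The pH values reported above are means of triplicate readings judged against the 4.5-6.5 skin range; a minimal Python sketch of that bookkeeping follows. The triplicate values are illustrative placeholders, not the study's raw data:

```python
# Average triplicate pH readings and flag formulas outside the 4.5-6.5
# skin-compatibility range cited in the methods. Values are invented.
def mean_ph(readings):
    return sum(readings) / len(readings)

triplicates = {"blank": [6.8, 6.8, 6.8], "F1": [6.7, 6.7, 6.7],
               "F2": [6.4, 6.4, 6.4], "F3": [6.3, 6.3, 6.3]}
for name, reads in triplicates.items():
    ph = mean_ph(reads)
    status = "within" if 4.5 <= ph <= 6.5 else "outside"
    print(f"{name}: pH {ph:.1f} ({status} 4.5-6.5)")
```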
Table 1. Modified Formula for Nelumbo nucifera Flower Ethanol Extract Cream Scrub
Table 2. Modified Formula for Nelumbo nucifera Leaf Ethanol Extract Cream Scrub
Table 5. pH test of Nelumbo nucifera Flower Ethanol Extract Cream Scrub
"Medicine",
"Environmental Science",
"Chemistry"
] |
On decompositions of estimators under a general linear model with partial parameter restrictions
Abstract A general linear model can be given in certain multiple partitioned forms, and there exist submodels associated with the given full model. In this situation, we can make statistical inferences from the full model and the submodels, respectively. It has been realized that links exist between inference results obtained from the full model and its submodels, and it is therefore of interest to establish certain links among estimators of parameter spaces under these models. In this approach, the methodology of additive matrix decompositions plays an important role in obtaining satisfactory conclusions. In this paper, we consider the problem of establishing additive decompositions of estimators in the context of a general linear model with partial parameter restrictions. We demonstrate how to decompose best linear unbiased estimators (BLUEs) under the constrained general linear model (CGLM) as sums of estimators under submodels with parameter restrictions by using a variety of effective tools in matrix analysis. The derivation of our main results is based on extensive algebraic manipulation of the given matrices and their generalized inverses in the CGLM, and the contributions illustrate various skillful uses of state-of-the-art matrix analysis techniques in the statistical inference of linear regression models.
Introduction
Consider a partitioned linear model with partial parameter restrictions
$$\mathscr{M}:\; y = X\beta + \varepsilon = X_1\beta_1 + \cdots + X_k\beta_k + \varepsilon,\quad A_1\beta_1 = b_1,\ \ldots,\ A_k\beta_k = b_k,\quad E(\varepsilon)=0,\ D(\varepsilon)=\sigma^2\Sigma,$$
where $y$ is an $n\times 1$ vector of observable response variables, $X = [X_1,\ldots,X_k]$ is an $n\times p$ matrix of arbitrary rank, $X_1,\ldots,X_k$ are known $n\times p_1,\ldots,n\times p_k$ matrices with $p = p_1+\cdots+p_k$, $A_1,\ldots,A_k$ are given $m_1\times p_1,\ldots,m_k\times p_k$ matrices with $m = m_1+\cdots+m_k$, and $b_1,\ldots,b_k$ are known $m_1\times 1,\ldots,m_k\times 1$ vectors. The system of linear equations in $\mathscr{M}$ is often available as extraneous information that the unknown parameter vector $\beta$ must satisfy; it is an integral part of the constrained general linear model (CGLM) and should ideally be utilized in any estimation procedure for the parameter space in (1). Associated with $\mathscr{M}$ are the following $k$ submodels
$$\mathscr{M}_i:\; y = X_i\beta_i + \varepsilon_i,\quad A_i\beta_i = b_i,\quad E(\varepsilon_i)=0,\ D(\varepsilon_i)=\sigma^2\Sigma,\quad i=1,\ldots,k.$$
Obviously, these models can be considered reduced versions of $\mathscr{M}$ obtained by deleting the $k-1$ regressors other than $X_i\beta_i$, $i=1,\ldots,k$. It has been realized that estimators of the unknown parameters in $\mathscr{M}$ and $\mathscr{M}_i$ have intrinsic connections, and there is interest in establishing additive decompositions of estimators under the partitioned model and its submodels. It is convenient to rewrite linear models in certain partitioned forms and then to carry out estimation and statistical inference under the partitioned models. One of the main objectives in the statistical inference of linear models is to establish various estimators of the parameter spaces in the models and to characterize the mathematical and statistical properties of these estimators under various model assumptions. In this approach statisticians are often interested in the connections between different estimators, and especially in establishing possible equalities between estimators; there have been various attempts to establish additive decomposition equalities for estimators under linear models. Under the assumptions in (9) and (10), it is natural to consider relations among the best linear unbiased estimators (BLUEs) of $\hat{X}\beta$ in (9) and $\hat{X}_i\beta_i$ in (9) and (10). In this paper, we first show that under the assumption that $X_1\beta_1,\ldots,X_k\beta_k$ and $\hat{X}_1\beta_1,\ldots,\hat{X}_k\beta_k$ are estimable in (9), the BLUE of $X\beta$ in $\hat{\mathscr{M}}$ admits two additive decomposition identities. In view of these observations, we propose two additive decomposition equalities for the BLUEs of $X\beta$ and $\hat{X}\beta$ in $\hat{\mathscr{M}}$, and then derive identifying conditions for the equalities to hold. These estimator decomposition identities have many statistical interpretations and are not rare in the statistical analysis of CGLMs. The problem of additive decompositions of BLUEs under general linear models was approached in [4,5]; Zhang and Tian [6] recently investigated the above two decomposition identities for $k = 2$ using effective algebraic methods for handling additive decompositions of matrix expressions and ranks/ranges of matrices. Before proceeding, we introduce the notation used in this paper. $\mathbb{R}^{m\times n}$ stands for the collection of all $m\times n$ real matrices.
The symbols $A^{\top}$, $r(A)$, and $\mathscr{R}(A)$ stand for the transpose, the rank, and the range (column space) of a matrix $A \in \mathbb{R}^{m\times n}$, respectively; $I_m$ denotes the identity matrix of order $m$. The Moore-Penrose inverse of $A$, denoted by $A^{+}$, is defined to be the unique solution $G$ of the four matrix equations $AGA = A$, $GAG = G$, $(AG)^{\top} = AG$, and $(GA)^{\top} = GA$. Further, let $P_A$, $E_A$, and $F_A$ stand for the three orthogonal projectors (symmetric idempotent matrices) $P_A = AA^{+}$, $E_A = A^{\perp} = I_m - AA^{+}$, and $F_A = I_n - A^{+}A$. Two symmetric matrices $A$ and $B$ of the same size are said to satisfy the inequality $A \succcurlyeq B$ in the Löwner partial ordering if $A - B$ is nonnegative definite. Further information about the orthogonal projectors $P_A$, $E_A$, and $F_A$ and their applications in linear statistical models can be found in [7-9]. It is well known that the Löwner partial ordering is a surprisingly strong and useful property between two symmetric matrices; for more results on the Löwner partial ordering of symmetric matrices and its applications in statistical analysis see, e.g., [8]. Generalized inverses of matrices are common tools for dealing with singular matrices; they are now a fruitful and core part of matrix theory and have a profound impact in the field of statistics.
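Since the paper leans on these objects throughout, a short numerical sketch may help: NumPy's pinv computes the Moore-Penrose inverse, which we can check against the four Penrose equations and then use to build the projectors $P_A$, $E_A$, and $F_A$. This is our illustration, not code from the paper:

```python
# Verify the four Penrose equations for numpy's pinv and form the
# orthogonal projectors defined in the notation paragraph above.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank-deficient 5x4
Ap = np.linalg.pinv(A)
m, n = A.shape

checks = [
    np.allclose(A @ Ap @ A, A),        # A G A = A
    np.allclose(Ap @ A @ Ap, Ap),      # G A G = G
    np.allclose((A @ Ap).T, A @ Ap),   # (A G)^T = A G
    np.allclose((Ap @ A).T, Ap @ A),   # (G A)^T = G A
]
print(all(checks))  # True

P_A = A @ Ap                # projector onto the range of A
E_A = np.eye(m) - P_A       # projector onto the orthogonal complement
F_A = np.eye(n) - Ap @ A    # projector onto the null space of A
print(np.allclose(P_A @ P_A, P_A), np.allclose(E_A @ A, 0), np.allclose(A @ F_A, 0))
```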
Some preliminaries in linear algebra
Statistical inference for linear models is based entirely on computations with the given vectors and matrices in the models, and formulas and algebraic techniques for handling matrices in linear algebra and matrix theory play an important role in deriving estimators and characterizing their performance. Because BLUEs of parameter spaces are calculated from, and represented by formulas composed of, the given matrices and vectors in the models, our approach to the above problems is in fact to establish and characterize matrix equalities composed of matrices and their generalized inverses; hence we need effective mathematical tools to characterize the above equalities of estimators and their covariance matrices under CGLMs. As remarked in [10], a good starting point for the entry of matrices into statistics was the 1930s, and it is now routine to use given vectors, matrices, and their generalized inverses to formulate various estimators of parameter spaces in linear models and to make the corresponding statistical inferences.
As the study of additive decompositions of estimators in the context of linear regression models requires effective mathematical tools, it leads to algebraic questions that overlap with the precise description and characterization of matrix decomposition identities in linear algebra. The scope of this section is to introduce various formulas for ranks of matrices suitable for establishing and characterizing possible equalities for estimators under CGLMs. We first introduce some fundamental formulas for calculating ranks of matrices that will be used in the statistical analysis. Recall that the rank of a matrix is a conceptual foundation of matrix theory and the most significant finite nonnegative integer reflecting the intrinsic properties of matrices, while the mathematical prerequisites for understanding matrix rank are minimal and do not go beyond elementary linear algebra. The intriguing connections between generalized inverses of matrices and matrix rank formulas were recognized in the 1970s, and a seminal work on establishing formulas for calculating ranks of matrices and their generalized inverses was presented in [11]. Matrix rank formulas are direct and effective tools for simplifying matrix expressions and equalities. The whole work in this paper is based on the effective use of the matrix rank methodology (MRM), a set of quantitative techniques that encompasses: I. establishing non-trivial analytical formulas for calculating the maximum and minimum ranks of a matrix expression, and using the ranks to determine the singularity or nonsingularity of the matrix expression, the rank invariance of the matrix expression, and the dimension of the row/column space of the matrix expression; II. establishing formulas for calculating the rank of the difference of two matrix expressions, and using them to derive necessary and sufficient conditions for the two matrix expressions to be equal, i.e., proving matrix equalities by matrix rank formulas; III. characterizing relations between two linear subspaces or two matrix sets by matrix rank formulas.
The above assertions show that establishing formulas for calculating ranks of matrices has important and distinctive consequences from a theoretical point of view. The MRM thus provides a specified algebraic framework for tackling matrix expressions and matrix equalities, and gives a glimpse into a very broad and interesting field of matrix mathematics. It was not until a few decades ago, however, that the MRM was recognized as an effective and influential tool in mathematics and extensively applied in matrix theory and its applications. Because matrices are common objects in linear regression analysis, the advent of the MRM has extended well beyond matrix theory into statistics; seminal work on the fundamental theory of the MRM and its applications in statistics can be found, e.g., in [11-13]. Recent work on the MRM in the analysis of additive decompositions of BLUEs under linear models was presented in [4-6], while contributions on the MRM in the statistical analysis of CGLMs can be found in [14-24].
In order to establish and characterize various possible equalities for estimators in the context of linear models and to simplify various matrix equalities composed by Moore-Penrose inverses of matrices, we will need the following well-known rank formulas involving Moore-Penrose inverses to make the paper self-contained.
Furthermore, the following results hold: … With the support of the formulas in Lemmas 2.1-2.3, we are able to convert the problems in (11)-(14) into algebraic problems of characterizing matrix equalities composed of the given matrices in the models and their generalized inverses, and to derive analytical solutions by the methods of matrix equations, matrix rank formulas, and various skillful partitioned-matrix calculations.
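The lemmas themselves are omitted in this excerpt; to give a flavour of the matrix rank methodology, the following sketch numerically checks one classical rank identity of the kind collected in Lemmas 2.1-2.3, $r[A, B] = r(A) + r(E_A B)$ (a well-known Marsaglia-Styan-type formula). The numerical check is our illustration, not the paper's code:

```python
# Check r[A, B] = r(A) + r(E_A B) on random matrices, where
# E_A = I - A A^+ projects onto the orthogonal complement of R(A).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 4))  # rank 3
B = rng.standard_normal((6, 2))

E_A = np.eye(6) - A @ np.linalg.pinv(A)
lhs = np.linalg.matrix_rank(np.hstack([A, B]))
rhs = np.linalg.matrix_rank(A) + np.linalg.matrix_rank(E_A @ B)
print(lhs, rhs)  # equal, e.g. 5 5
```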
Estimability of parameter spaces under CGLMs
We take $\sigma^2 = 1$ in (1)-(10) for convenience of presentation, because it plays no role in the main results of this paper. In what follows, we assume that the model in (9) is consistent; see [27,28]. We next introduce the definitions of the estimability of parameter spaces in CGLMs.
Definition. Let $\hat{\mathscr{M}}$ be as given in (9) and let $K \in \mathbb{R}^{k\times p}$ be given. Then the vector $K\beta$ of the unknown parameters is said to be estimable under $\hat{\mathscr{M}}$ if … It is well known in statistical theory that the unbiasedness of linear statistics with respect to given parameter spaces in linear models is an important property, and considerable literature exists on the estimability of parameter spaces in linear models; see, e.g., [29-38] for some excellent expositions. We next present some classic and new results on the estimability of the parameter space in (9) and give their proofs. Lemma 3.2 (cf. [29]). Let $\hat{\mathscr{M}}$ be as given in (9) and let $K \in \mathbb{R}^{k\times p}$ be given. Then the following results hold: … under $\hat{\mathscr{M}}$, $i = 1,\ldots,k$. Let $\hat{\mathscr{M}}$ be as given in (9). Then the following statements are equivalent: (a) all of $X_1\beta_1,\ldots,X_k\beta_k$ are estimable under $\hat{\mathscr{M}}$; … Proof. It is obvious from (7) that … Hence, if (c) holds, we obtain from (22) that …, which means that (a) and (b) hold by Lemma 3.2. The equivalence of (c) and (23) can be proved by induction; we leave it to the reader.
BLUEs' computations
Theoretical and applied research on a CGLM seeks to develop various possible estimators of the parameter space in the CGLM. When unbiased estimators exist for a given parameter space, there are usually many of them; it is therefore natural to seek the unbiased estimator with the smallest dispersion matrix among all unbiased estimators. That is to say, unbiasedness and smallest dispersion matrices are the most intrinsic requirements on estimators in statistical analysis and inference. The concepts of BLUEs of parameter spaces in the contexts of (1)-(10) are given below.
Definition 4.1. Let $\hat{\mathscr{M}}$ be as given in (9), and assume that $K\beta$ is estimable under $\hat{\mathscr{M}}$ for $K \in \mathbb{R}^{k\times p}$. If there exists an $L \in \mathbb{R}^{k\times(m+n)}$ such that $E(L\hat{y} - K\beta) = 0$ and $D(L\hat{y} - K\beta) = \min$ (24) hold in the Löwner partial ordering, the linear statistic $L\hat{y}$ is defined to be the BLUE of $K\beta$ under $\hat{\mathscr{M}}$ and is denoted by $\mathrm{BLUE}_{\hat{\mathscr{M}}}(K\beta)$. Estimators of the parameter spaces in linear models are usually formulated from mathematical operations on the observed response vectors, the given model matrices, and the covariance matrices of the error terms in the models. Hence the standard inference theory of linear statistical models can be established from exact algebraic expressions of estimators, which is easily acceptable from both mathematical and statistical points of view. In fact, linear statistical models are the only type of statistical model with complete and solid support from linear algebra and matrix theory. Observing that (9) is a special case of GLMs, the following lemma follows from well-known results on BLUEs under linear models; see, e.g., [28, p. 282] and [39, p. 55].
Lemma 4.2. Let $\hat{\mathscr{M}}$ be as given in (9), assume that $K\beta$ is estimable under $\hat{\mathscr{M}}$ for $K \in \mathbb{R}^{k\times p}$, and denote $t = n + m$. Then the following results hold: (a) the following implication holds: … (b) … can be written as $\mathrm{Cov}\{\ldots\}$, where $V_i \in \mathbb{R}^{t\times t}$ is arbitrary, $i, j = 1,\ldots,k$; (c) the following two decomposition identities hold: … Proof. Results (a) and (b) follow directly from (8) and (28) by letting $K = Y_i$ and $\hat{Y}_i$, respectively. Result (c) follows directly from (7), (29), and (30).
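As a numerical anchor for Definition 4.1 and Lemma 4.2, here is a minimal sketch of a BLUE in the simplest unconstrained special case with nonsingular covariance (the paper treats the far more general constrained, possibly singular setting); all values are illustrative assumptions:

```python
# Under y = X b + e with D(e) = Sigma nonsingular, the BLUE of X b is the
# generalized least squares fit X (X' Sigma^-1 X)^+ X' Sigma^-1 y.
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 3
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5])
Sigma = np.diag(rng.uniform(0.5, 2.0, n))   # heteroscedastic errors
y = X @ beta + rng.multivariate_normal(np.zeros(n), Sigma)

Si = np.linalg.inv(Sigma)
blue_Xb = X @ np.linalg.pinv(X.T @ Si @ X) @ X.T @ Si @ y
print(blue_Xb[:3])  # BLUE of X b at the first three observations
```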
In what follows, we use $\{\mathrm{BLUE}_{\hat{\mathscr{M}}}(K\beta)\}$ to denote the collection of all $\mathrm{BLUE}_{\hat{\mathscr{M}}}(K\beta)$ in (28).
Additive decompositions of BLUEs under a full CGLM and its submodels
For convenience of representation, we adopt the following notation in this section:
$$\begin{bmatrix} 0 & X_2 & \cdots & X_k \\ X_1 & 0 & \cdots & X_k \\ \vdots & \vdots & \ddots & \vdots \end{bmatrix}, \quad i \neq j,\ i, j = 1,\ldots,k.$$
The misspecified BLUEs under the submodels in (10) are given below: …, where $H_i \in \mathbb{R}^{n\times t}$ and $G_i \in \mathbb{R}^{t\times t}$ are arbitrary matrices, $i = 1,\ldots,k$. It should be pointed out that, under the assumptions in (9), the $k$ submodels in (10) are misspecified versions of (9), so the estimators in (44) and (45) are not true BLUEs of $X_i\beta_i$ and $\hat{X}_i\beta_i$ under the models in (10); that is, they are neither unbiased for $X_i\beta_i$ and $\hat{X}_i\beta_i$ under (9), nor do they have the smallest covariance matrices in the Löwner sense. In such a case, the sums of these estimators may nevertheless be the BLUEs of $X\beta$ and $\hat{X}\beta$ under some conditions. In this section, we derive some algebraic and statistical properties of the BLUEs under (9) and (10), and then give necessary and sufficient conditions for the equalities in (13) and (14) to hold. Although the results in the last section present exact formulas for BLUEs under various assumptions, we must pay close attention to the mathematical manipulations behind the BLUE formulas in order to establish connections among them. In this process, many skillful calculations of matrix ranks and elementary block-matrix operations are conducted to establish and simplify matrix equalities and expressions.
Concerning the relations between $\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(X_i\beta_i)$ and $\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(\hat{X}_i\beta_i)$, $i = 1,\ldots,k$, we have the following conclusions.
It can be seen from (47) and (49) that neither the sum of the BLUEs of $X_i\beta_i$ nor the sum $\mathrm{BLUE}_{\hat{\mathscr{M}}_1}(\hat{X}_1\beta_1) + \cdots + \mathrm{BLUE}_{\hat{\mathscr{M}}_k}(\hat{X}_k\beta_k)$ is necessarily unbiased for $\hat{X}\beta$ under (1). Concerning the unbiasedness of the two sums and the corresponding BLUE decompositions, we have the following general conclusions: there exist $\mathrm{BLUE}_{\hat{\mathscr{M}}_i}(\hat{X}_i\beta_i)$, $i = 1,\ldots,k$, such that … It should be pointed out that many exclusive and tricky methods for establishing and simplifying matrix expressions and matrix equalities have been developed in linear algebra and matrix theory, to the great benefit of both mathematics and its applications. In particular, these methodologies have found essential applications in statistical analysis, such as establishing various intriguing and sophisticated formulas, equalities, and inequalities associated with estimators under linear statistical models.
"Mathematics"
] |
Whole genome sequencing for improved understanding of Mycobacterium tuberculosis transmission in a remote circumpolar region
Few studies have used genomic epidemiology to understand tuberculosis (TB) transmission in rural and remote settings – regions often unique in history, geography and demographics. To improve our understanding of TB transmission dynamics in Yukon Territory (YT), a circumpolar Canadian territory, we conducted a retrospective analysis in which we combined epidemiological data collected through routine contact investigations with clinical and laboratory results. Mycobacterium tuberculosis isolates from all culture-confirmed TB cases in YT (2005–2014) were genotyped using 24-locus Mycobacterial Interspersed Repetitive Units-Variable Number of Tandem Repeats (MIRU-VNTR) and compared to each other and to those from the neighbouring province of British Columbia (BC). Whole genome sequencing (WGS) of genotypically clustered isolates revealed three sustained transmission networks within YT, two of which also involved BC isolates. While each network had distinct characteristics, all had at least one individual acting as the probable source of three or more culture-positive cases. Overall, WGS revealed that TB transmission dynamics in YT are distinct from patterns of spread in other, more remote Northern Canadian regions, and that the combination of WGS and epidemiological data can provide actionable information to local public health teams.
Introduction
Canada's tuberculosis (TB) rate has been decreasing overall, yet rates remain elevated in particular populations and regions. Recent outbreaks in two areas of Canada's North, Nunavik and Nunavut, resulted in annual incidence rates higher than those of many low-income countries [1,2]. However, this is not the case in all circumpolar settings, where public health efforts have contributed to declining TB rates. From 2006 through 2012, Yukon Territory (YT) reported a rate of 12.1 cases per 100 000 population. While this is over twice the national average of 4.8 cases/100 000, it is the lowest rate amongst Canada's Northern territories (25.4/100 000 in the Northwest Territories, immediately east of YT, and 194.3/100 000 in Nunavut) [2,3]. Alaska, located west of YT, has seen a sharp decrease in cases over the last few decades, reporting an average incidence of 8.1/100 000 (2006-2012), with most cases concentrated in rural communities, many inaccessible by road [2,4]. Thus, while northern remote settings are often viewed similarly by population and public health programmes, it is clear that, with respect to TB, there are significant differences across these regions, likely explained by a combination of the robustness of regional public health, access to appropriate housing, geography, intra-community movement, and the populations themselves [5]. Understanding the unique epidemiology of TB in each region is therefore vital to delivering tailored interventions that drive rates in circumpolar settings closer to the World Health Organization's elimination goals.
Genotyping programmes have provided significant insights into the molecular epidemiology of TB in many low-incidence countries, helping to detect outbreaks [6,7], and more recently, genome sequencing has dramatically improved our understanding of both clustering and TB transmission in communities worldwide [8-11]. However, only two studies to date have used this genomic epidemiology approach to examine transmission in remote Northern locations: one in Nunavik, Québec [12], an Arctic region of Canada's North, and a second in Greenland, which used genomics to detect 'hotspot cases' responsible for chains of transmission [13]. To better understand the patterns of TB transmission in YT, we sequenced Mycobacterium tuberculosis (Mtb) genomes from all culture-positive TB diagnoses in YT over a 10-year period, the first genomic epidemiology study of TB in this region. Recognising that, in contrast to many other northern regions in Canada, year-round highway access and multiple airports facilitate travel between YT and its southern neighbour, British Columbia (BC), we also examined YT Mtb genomes in the context of Mtb genomes sequenced in BC during the same time period. This unique cross-border comparison is possible because the BC Centre for Disease Control (BCCDC) and the BC Public Health Laboratory (BCPHL) are contracted by YT to provide TB services such as laboratory diagnostics and case-management support, and both jurisdictions access a shared data repository. This allowed us to identify chains of transmission within and across the YT/BC border and to fully describe the genomic epidemiology of TB in this remote circumpolar region.
Study setting and design
YT is the sparsely populated (0.1 persons/km²) [14], most northwestern region of Canada, immediately north of BC. All TB cases diagnosed in YT are reported to the Yukon Communicable Disease Control (YCDC), and those in BC to the BCCDC. Care and treatment of individuals diagnosed with TB is the responsibility of YCDC, in partnership with Yukon Government Community Nursing, and includes contact investigations (CIs) for newly diagnosed cases. The BCPHL receives all Mtb isolates for both YT and BC and conducts routine diagnostic testing, universal 24-locus Mycobacterial Interspersed Repetitive Units-Variable Number of Tandem Repeats (MIRU-VNTR) genotyping, and whole genome sequencing (WGS) on request. The study population (Fig. 1) included all YT culture-positive TB cases from 2005 through 2014 (n = 32), which were compared with TB cases diagnosed in BC during the same time period (n = 2292); the BC study population has been described previously [15].
Ethics approval for this study was granted by the University of British Columbia (certificate #H12-00910).
Case-level information
Case-level clinical and demographic data, as well as epidemiological data collected during routine CIs, for all TB cases from BC and YT were extracted from the integrated Public Health Information System (iPHIS). To classify community type for BC cases into metro (>190 000), urban/rural (40 001-190 000), rural (10 001-40 000), and remote (⩽10 000) groups, we used the population density of the geographic service area in which each case resided. YT community types were classified by home postal code, with a second digit of '1' in the forward sortation area indicating urban/rural and a '0' indicating a remote community.
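As a concrete illustration of the YT postal-code rule just described, a small Python sketch follows; the function name and example postal codes are ours, not study data:

```python
# Classify a YT community type from the home postal code: the second
# character of the forward sortation area is '1' for urban/rural and
# '0' for remote, per the rule described above.
def yt_community_type(postal_code: str) -> str:
    second = postal_code.strip()[1]
    return "urban/rural" if second == "1" else "remote"

print(yt_community_type("Y1A 1A1"))  # urban/rural
print(yt_community_type("Y0B 1G0"))  # remote
```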
Laboratory methods
All Mtb isolates were obtained from specimens submitted to the BCPHL for routine diagnostic and phenotypic susceptibility testing. Isolates were revived from archived frozen stocks, DNA was extracted, and 24-locus MIRU-VNTR genotyping was performed as previously described [15]. Isolates lacking an amplicon peak at any locus were repeated with newly extracted DNA, and where there remained no peak at a single locus, the locus was coded as missing data and included in the analyses. All 32 culture-positive isolates of the 38 notified cases in YT during the study period were successfully genotyped. These results were compared to genotypes of all culture-positive Mtb isolates from BC over the same period [15]. WGS was completed for all 32 YT isolates as well as 1284 BC isolates, which included all isolates genotypically clustered by MIRU-VNTR with a YT isolate. WGS was completed using 125 bp paired-end reads on the Illumina HiSeqX platform at Canada's Michael Smith Genome Sciences Centre (Vancouver, BC).
WGS analysis
The bioinformatics pipeline developed by Oxford University and Public Health England was used to analyse the resulting fastq files [16]. Reads were aligned to the Mtb H37Rv reference genome (GenBank ID: NC000962.2), with an average of 92% of the reference genome covered. Single-nucleotide variants (SNVs) were identified across all mapped non-repetitive sites. Genomic clusters were defined independently of MIRU-VNTR clusters, and a unique identifier (WClustID) was assigned where isolates differed by ⩽5 SNVs, a threshold reflecting recent local transmission [9]. Concatenated SNVs combined with epidemiological data collected through routine CIs and consultation with YCDC public health authorities were used to generate temporal transmission networks. Major lineage was predicted for each sequenced isolate based on lineage-defining SNVs [17], and in silico antibiotic resistance was predicted as previously described [18]. Fastq files for all genomes are available at NCBI under BioProject PRJNA413593 and PRJNA49659.
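For readers who want to reproduce the clustering step, the following is a minimal Python sketch of single-linkage clustering cut at the 5-SNV threshold described above; the distance matrix is invented for illustration and is not study data:

```python
# Single-linkage clusters cut at 5 SNVs: any isolate within 5 SNVs of a
# cluster member joins that cluster (analogous to assigning WClustIDs).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# pairwise SNV distances among 5 hypothetical isolates
D = np.array([[ 0,  2,  4, 40, 41],
              [ 2,  0,  3, 39, 40],
              [ 4,  3,  0, 42, 43],
              [40, 39, 42,  0,  1],
              [41, 40, 43,  1,  0]])

Z = linkage(squareform(D), method="single")
clusters = fcluster(Z, t=5, criterion="distance")
print(clusters)  # e.g. [1 1 1 2 2]: two genomic clusters
```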
Statistical analysis
We calculated descriptive statistics for basic demographic and clinical information across two categories: (i) all cases diagnosed within YT, and (ii) cases diagnosed in BC residents within five SNVs of a YT case, classified as 'Related' (BC R ). Univariable analysis used the t-test for comparisons of mean age, and categorical variables were compared using the χ² test or Fisher's exact test where appropriate. The frequency with which a MIRU-VNTR pattern was observed within the YT and/or BC R populations was described, and to place MIRU-VNTR genotypes in the wider context of BC as a whole, we also compared genotypes to BC isolates not closely related to YT isolates based on genomic distance thresholds (>5 SNVs), classified as 'Not Related' (BC NR ). A dendrogram based on 24-locus MIRU-VNTR genotyping patterns was generated using the categorical (Hamming) distance and the UPGMA (unweighted pair group method with arithmetic mean) algorithm. All statistical analyses were done in R v3.4.1.
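The paper's statistics were run in R v3.4.1; as an equivalent sketch of the dendrogram construction only, here is a Python version using Hamming distances and UPGMA ('average') linkage. The toy profiles are six-locus stand-ins for real 24-locus MIRU-VNTR genotypes:

```python
# Hamming distance over MIRU-VNTR profiles, clustered with UPGMA.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

profiles = np.array([[2, 3, 4, 2, 5, 3],
                     [2, 3, 4, 2, 5, 3],
                     [2, 3, 3, 2, 5, 3],
                     [1, 5, 2, 4, 2, 2]])

d = pdist(profiles, metric="hamming")  # fraction of mismatching loci
Z = linkage(d, method="average")       # UPGMA
tree = dendrogram(Z, no_plot=True)
print(tree["ivl"])                     # leaf order of the dendrogram
```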
MIRU-VNTR and WGS provide different estimates of clustering
From 2005 through 2014, 32 individuals were diagnosed with culture-positive TB in YT. MIRU-VNTR genotyping grouped 21 of these cases into three clusters (3-13 YT isolates/cluster), yielding a clustered proportion of 65.6% within the territory. One YT isolate had an untypable locus yet matched a cluster unique to YT for the other 23 typable loci. Six YT isolates had MIRU-VNTR patterns that were unique amongst the YT population yet clustered with isolates in BC, bringing the total number of MIRU-VNTR clusters across both jurisdictions containing at least one YT case to nine (Fig. 2). Four YT isolates remained unclustered after comparison with all BC isolates. All four were within one or two loci of a YT and/or BC genotype cluster.
Genomics provided a higher resolution view of clusters suggestive of recent transmission, merging several MIRU-VNTR clusters that differed by a single locus or had an untypable locus into single groups supported by CI data, and in other cases revealing that MIRU-VNTR clustered isolates, such as those belonging to MClust-023, were not truly clustered in a way that would suggest recent local transmission (Fig. 2). Using a five SNV threshold, we identified six genomic clusters with at least one YT case, involving a total of 28 YT and 101 BC R isolates and ranging from two to 59 isolates (Fig. 3). Another YT isolate was within 20 SNVs of a genomic cluster, while the remaining three isolates were >200 SNVs away from any other YT isolate. By WGS, the clustered proportion was 28/32 (87.5%) when YT isolates were considered alongside BC isolates, and 25/32 (78.1%) considering only isolates among YT residents. With the exception of two Indo-Oceanic lineage isolates, all other YT isolates (94.1%) belonged to the Euro-American lineage. One of the Indo-Oceanic lineage isolates was phenotypically resistant to isoniazid (0.4 µg/ml) due to a katG S315T mutation, while the remaining isolates were susceptible to all first-line antibiotics.
Genomically related cases across jurisdictions are similar clinically
Comparing all YT cases to the genomically related BC R cases (n = 101), we found similar characteristics across both populations, including mean ages of 45.8 years (standard deviation (S.D.) ± 16.7) and 46.8 years (S.D. ± 11.9) for YT and BC R individuals, respectively. Both groups were predominantly Canadian-born, with 93.8% of the YT study population and 88.9% of BC R persons born in Canada (Table 1). The proportion of individuals with a clinical presentation associated with TB transmission was high in both populations, with respiratory TB diagnosed in 90.6% of YT and 89.1% of BC R individuals. Likewise, the smear-positive TB proportion was high (>82%) in both YT and BC R persons. Of note, the proportion of individuals with cavitary TB was over 1.5 times higher in the YT population than among BC R individuals, with cavitary disease in 37.5% (12/32) of YT persons (P = 0.099). With respect to risk factors for transmission [19], the majority of individuals (YT: 71.9%, BC R : 61.5%) reported at least one risk factor (HIV, illicit drug use, or alcohol misuse). Reflecting the differing demographics of the two settings, most YT individuals resided in remote (84.4%) regions, whereas most BC R individuals resided in metro areas (82.2%).
Transmission reconstruction
To characterise person-to-person spread of TB within YT, we constructed temporal transmission networks using WGS results combined with epidemiological data for the three genomic clusters with transmission between or to numerous YT persons: WClust-1, WClust-9 and WClust-19 (Fig. 4). Although Mtb isolate YT13 is above the five-SNV threshold set for recent transmission, it is within 18 SNVs of WClust-19, a cluster genotypically and genomically unique to the YT population, and was therefore included in the reconstruction figure, recognising that this case likely represents reactivation of a previously acquired infection with a strain circulating within YT. For WClust-1, a large cluster with discrete minimum spanning tree branches in both YT and BC, we included only the branch of YT isolates, together with the two closely related BC isolates (Fig. 3).
Each of the three clusters differs slightly. WClust-19 is the only cluster exclusively comprising YT individuals, whereas WClust-1 and WClust-9 had one or more BC persons with related isolates. Within WClust-1 the BC cases may have acquired TB from a YT individual, whereas in WClust-9 a BC individual likely transmitted TB to a number of BC and YT cases. SNV distances ranged within clusters; however, WClust-19 saw no genomic variation in the transmission chain stemming from YT8, despite up to 6 years between disease acquisition and diagnosis. WClust-9 has four BC isolates 0-5 SNVs from those in YT (Fig. 4). However, with the exception of BC2, there are no known epidemiological connections between these cases that would suggest a common source not identified through CIs.
WClust-1 represents the largest YT cluster. CIs revealed that many of the individuals were social contacts of one another, with at least two individuals suspected of giving rise to multiple secondary cases. Here, genomics identified a minority variant (at the SNV site, 15% of reads had adenine (A) and 85% were cytosine (C)) in YT18, whereas in the subsequent cases, the SNV was fully fixed, confirming this individual as the most likely source for the cases that followed (online Supplementary Fig. S1). Genomic data also confirmed the inclusion of three Yukon (YT23, YT25 and YT27) and two BC isolates (BC31 and BC49) in this cluster, despite no apparent epidemiological linkages to each other or other cluster members.
While each of the three genomic clusters had unique features, all had at least one individual source of multiple culture-positive secondary cases, and all spanned several years, with some individuals progressing rapidly to active disease, and others reactivating after a long period of latency.
Discussion
We describe the genomic epidemiology of TB in Northwestern Canada over a 10-year period, finding that persons diagnosed with TB were largely Canadian-born with Euro-American lineage isolates, and that nearly all cases were attributable to transmission within Canada, consistent with the epidemiology of TB elsewhere in Canada's North [1,12]. Genomic data, combined with detailed epidemiological data, allowed us to reconstruct likely transmission pathways among the three large clusters. We found that, as is true for a number of infectious diseases, a small number of individuals account for a disproportionate number of secondary cases: the phenomenon of 'super-spreaders' [20]. Understanding the risk factors and epidemiological characteristics driving super-spreading in a community is important for better prioritising TB prevention and care programmes. In our YT study population, the proportion of individuals with clinical risk factors frequently associated with transmission, such as cavitary disease and smear positivity [19], was quite high, and anecdotal evidence from the local public health team suggested that delays in diagnosis might also have contributed to transmission. A recent publication [21] discussed the various drivers of TB transmission beyond clinical risk factors, including diagnostic delays, which increase the potential for disease progression and transmission [22,23], particularly amongst highly mobile, socially connected and infectious individuals.
Given the shared border between YT and BC, we also examined transmission across jurisdictions. Including genomically related BC isolates increased our estimate of clustering for YT isolates, suggesting that estimates derived from individual provincial or territorial data alone likely underestimate transmission in remote settings. Cross-border transmission appears to occur in both directions: in several cases YT residents likely transmitted to BC residents via social/community connections, with YT residents reporting travel/residential histories in both Northern BC communities and larger metropolitan regions. Additionally, three YT cases had isolates that clustered only with BC isolates and likely acquired their infections within BC, while a BC source was linked to six YT cases in WClust-9. Previous studies [12,13] of TB transmission in circumpolar settings saw genomically clustered isolates localised to specific communities; here, we observe the opposite, with transmission occurring across geographic boundaries. This underscores the notion that not all circumpolar TB transmission is the same, and while community-level interventions may be appropriate for some settings, investigating TB transmission in settings like YT requires inter-jurisdictional cooperation and multi-sectorial interventions.
Given the low genomic variation between cases, with most cases differing by 0-1 SNVs, our cluster reconstructions were only possible thanks to the detailed epidemiological information collected by the local public health team. Such minimal variation across multiple hosts over many years is not uncommon, and has been previously described in outbreaks elsewhere in Canada [11]. Our observation reinforces the need for comprehensive CI data coupled to genomics to fully understand regional epidemiology, though it is important to note that because genomic studies currently require Mtb culture, culture-negative TB cases are excluded from reconstructions. These cases are less likely to contribute to transmission due to low bacterial loads but cannot be completely excluded. TB diagnoses prior to the study period are also not captured here.
Understanding TB transmission dynamics is key to the design and delivery of effective evidence-based interventions to prevent the continuing spread of TB, and WGS will be an integral part of future investigations into the unique patterns of TB spread in a given region. It offers more focused epidemiological information than traditional laboratory methods such as MIRU-VNTR, with a faster turn-around time and at roughly the same cost, and enables in silico resistance prediction [24]. It also permits distinguishing between reactivation of a historically acquired latent TB infection and TB resulting from recent transmission [25]. This is particularly important in small populations whose isolates share high degrees of genotypic relatedness, such as YT's, where CI alone may not be able to differentiate these two scenarios. Nevertheless, disparities in basic laboratory services, turn-around times, and access to new technologies often exist between circumpolar territories and the rest of the country, and it is likely to be some time before WGS moves out of the specialised reference laboratory landscape and into real-time use in remote settings. In the interim, a commitment by the reference laboratories supporting these regions is needed to ensure that Mtb isolates from remote territories are included in WGS efforts, which will help bridge the gap and provide the same opportunity as the rest of the country to impact public health management of TB cases.
While the immediate impact of WGS on contact-tracing practices and annual TB incidence rates is yet to be seen, as we collect more data and build more transmission networks for a given region, we can begin to understand the fine-scale trends driving transmission and deploy interventions targeted to the region's specific needs. These may include: (i) targeted messaging to clinicians in remote settings to 'think TB', as many may never have seen a case of TB; (ii) enhanced CIs around individuals whose presentation and molecular epidemiology are consistent with those of a super-spreader; and (iii) implementing a framework to facilitate working across regional or provincial jurisdictions to jointly manage outbreaks, supported through the use of genomics. We therefore recommend routine WGS of TB cases from circumpolar regions to better understand the unique regional dynamics driving transmission and to assess ongoing levels of transmission in these settings.
"Medicine",
"Biology"
] |
Age-Related Inflammatory Balance Shift, Nasal Barrier Function, and Cerebro-Morphological Status in Healthy and Diseased Rodents
Increased blood–brain barrier (BBB) permeability and extensive neuronal changes have been described earlier in both healthy and pathological aging like apolipoprotein B-100 (APOB-100) and amyloid precursor protein (APP)–presenilin-1 (PSEN1) transgenic mouse models. APOB-100 hypertriglyceridemic model is a useful tool to study the link between cerebrovascular pathology and neurodegeneration, while APP–PSEN1 humanized mouse is a model of Alzheimer’s disease. The aim of the current study was to characterize the inflammatory changes in the brain with healthy aging and in neurodegeneration. Also, the cerebro-morphological and cognitive alterations have been investigated. The nose-to-brain delivery of a P-glycoprotein substrate model drug (quinidine) was monitored in the disease models and compared with the age-matched controls. Our results revealed an inflammatory balance shift in both the healthy aged and neurodegenerative models. In normal aging monocyte chemoattractant protein-1, stem cell factor and Rantes were highly upregulated indicating a stimulated leukocyte status. In APOB-100 mice, vascular endothelial growth factor (VEGF), platelet-derived growth factor (PDGF-BB), and interleukin-17A (IL-17A) were induced (vascular reaction), while in APP–PSEN1 mice resistin, IL-17A and GM-CSF were mostly upregulated. The nasal drug absorption was similar in the brain and blood indicating the molecular bypass of the BBB. The learning and memory tests showed no difference in the cognitive performance of healthy aged and young animals. Based on these results, it can be concluded that various markers of chronic inflammation are present in healthy aged and diseased animals. In APOB-100 mice, a cerebro-ventricular dilation can also be observed. For development of proper anti-aging and neuroprotective compounds, further studies focusing on the above inflammatory targets are suggested.
INTRODUCTION
In recent years, several publications have reported the effect of healthy aging on the permeability of the blood-brain barrier (BBB) (Erdő et al., 2017; Erdő and Krajcsi, 2019), and efflux transporter downregulation has been documented in correlation with advanced age. The process of aging is closely connected with a form of chronic inflammation and oxidative stress (Erdő et al., 2017). Similar observations have been published for chronic neurodegenerative disorders such as Alzheimer's disease, vascular dementia, and atherosclerosis (Erdő et al., 2017). Apolipoprotein B-100 (APOB-100) is the main structural protein of the triglyceride-rich very-low-density and the cholesterol-enriched intermediate- and low-density lipoprotein (LDL) particles. Therefore, overexpression of the APOB-100 protein in mice leads to an elevated plasma triglyceride level even on a normal chow diet (Bereczki et al., 2008; Lénárt et al., 2012). Several studies have demonstrated that increased serum LDL and APOB-100 levels in Alzheimer's disease patients are associated with the pathological symptoms. Indeed, APOB-100 overexpressing mice show many signs of neurodegeneration, such as synaptic dysfunction, tau hyperphosphorylation, amyloid plaque formation (in homozygous mice), apoptosis, and enlargement of the third and lateral ventricles of the brain (Bereczki et al., 2008; Lénárt et al., 2012). The chronic hypertriglyceridemia due to a high serum APOB-100 level may lead to the functional and morphological changes of the BBB that have also been described in this model (Hoyk et al., 2018). Therefore, the APOB-100 overexpressing mouse strain is a useful model for studying age-related cerebrovascular pathology and neurodegeneration induced by hyperlipidemia, as the symptoms develop after 7-8 months of age.
There are different transgenic models of Alzheimer's disease. Since mouse models containing only mutated presenilin (PSEN) genes show an increased proportion of amyloid-beta (Aβ)42 but do not exhibit amyloid plaques, bigenic lines have been developed by crossing transgenic mice overexpressing the mutant form of amyloid precursor protein (APP) with PSEN1 mice. Typically, these bigenic mice display an earlier onset and a more rapid rate of pathogenesis than monogenic lines, in terms of both amyloid accumulation and cognitive impairment (Esquerda-Canals et al., 2017).
Alzheimer's disease pathophysiology entails chronic inflammation involving innate immune cells, namely, microglia, astrocytes, and other peripheral blood cells. Inflammatory mediators, such as cytokines and complements, are also linked to Alzheimer's pathogenesis. Despite increasing evidence supporting the association between abnormal inflammation and Alzheimer's disease, no well-established inflammatory biomarkers are currently available for the diagnosis. Since many reports have shown that abnormal chronic inflammation accompanies the disease, non-invasive and readily available peripheral inflammatory biomarkers should be considered as possible indicators for early diagnosis (Park et al., 2020). For human theranostics, mainly the peripheral plasma markers can be applied, but for determination of the most relevant factors and crucial mechanisms, the biomarkers characterized from brain homogenates in preclinical studies have also high importance.
It is widely accepted that atherosclerosis also involves chronic inflammation of blood vessel walls. Soeki and Sata (2016) reviewed the relationship between atherosclerosis and the dynamics of various inflammatory biomarkers, focusing on the development and progression of coronary artery diseases. The initial stages of atherosclerosis are often asymptomatic; however, when an atherosclerosis patient becomes symptomatic, his or her quality of life is significantly impaired. Therefore, early detection, diagnosis, and treatment of atherosclerosis is essential. Cytokines are a class of high molecular weight polypeptides that deliver cell signals in the context of immunological responses, inflammatory reactions, hematopoiesis, and other basic biological functions. For example, interleukin (IL)-6 and tumor necrosis factor (TNF)α, members of the inflammatory cytokine family released from vascular smooth muscle cells, endothelial cells, monocytes, macrophages, and so forth, have been shown to be deeply involved in atherosclerosis. The evidence that links inflammatory markers to disease and prevention of disease is much greater for some inflammatory markers than for others. These markers provide valuable tools to study disease progression and new prevention strategies. Their value in clinical practice is still being investigated.
As for the molecular background of physiological healthy aging, a low-grade chronic inflammation is present even under normal aging conditions. The nuclear factor (NF)-κB signaling pathway has been recognized as the key process underlying this inflammation. Several studies have reported that age-related NF-κB signaling upregulates the expression of the proinflammatory genes TNF-α/β, interleukins (IL-1β, IL-2, and IL-6), chemokines (IL-8; regulated on activation, normal T cell expressed and secreted (RANTES)), and adhesion molecules (AMs) (Chung et al., 2011). Furthermore, NF-κB-mediated upregulation of proinflammatory molecules, such as C-reactive protein (CRP), IL-6, and TNF-α, is closely associated with various age-related chronic pathophysiological conditions (Chung et al., 2006). The degree and kinetics of the upregulation correlate with the severity of the age-related clinical symptoms, which can also be influenced by lifestyle factors (such as physical exercise, caloric restriction, and cognitive training).
In the current study, the models of healthy aging (in rats) and pathological aging (in APOB-100 and APP-PSEN1 mice) were studied to characterize the expression profile of inflammatory mediators in the brain, to analyze the nasal barrier permeability and function with aging, and also to study the possible morphological changes in the cerebral structures compared with healthy young or age-matched wild-type (WT) animals.
Animals
All animal experiments were performed in full compliance with the guidelines of the Association for Assessment and Accreditation of Laboratory Animal Care International's expectations for animal use, in the spirit of the license issued by the Directorate for the Safety of the Food Chain and Animal Health, Budapest and Pest County Agricultural Administrative Authority, Hungary. The animals were kept in an animal room at 22 ± 3 °C and 50 ± 20% relative humidity, with a 12-h light/dark cycle and free access to food and water before and during the experiments.
Mice
The APOB-100 transgenic mouse strain overexpressing the human APOB-100 protein was previously established by the group of Miklós Sántha (Bjelik et al., 2006), while B6C3-Tg(APPswe/PS1dE9)85Dbo/Mmjax mice were purchased from The Jackson Laboratory (Bar Harbor, ME, United States). Both mouse strains were maintained on a C57BL/6 genetic background in hemizygous form. Breeding of the transgenic mouse strains was approved by the regional Station for Animal Health and Food Control (Csongrád County, Hungary; project licenses: XVI./2724/2017 for the APOB-100 strain and XVI./1248/2017 for the APP/PS1 strain). To determine the genotype of hemizygous transgenic animals and WT littermates, DNA from tail biopsies of pups was purified, and the presence of the transgenes was detected by PCR, as described earlier (Bjelik et al., 2006; Tóth et al., 2013).
For cytokine array and magnetic resonance imaging (MRI) studies, 8- to 11-month-old male transgenic and WT mice were used. For microdialysis experiments, male APOB-100 mice and male and female APP-PSEN1 mice were used at the age of 8-11 months.
Brain Homogenate Preparation
The rats were anesthetized [400 mg/kg chloral hydrate intraperitoneally (i.p.)] and decapitated, and the left striatum was quickly removed and weighed. Then, 1 ml of cold 1× cell lysis buffer, diluted from 2× cell lysis buffer for ELISA (EA-0001, Signosis Inc., Santa Clara, CA, United States) with Milli-Q water, was added per 100 mg of tissue. The brain samples were homogenized with a tissue homogenizer (Ultra-Turrax TP 18/10; Staufen, Germany) on ice for one minute, until the sample became entirely homogeneous. The lysates were sonicated on ice for 30 s and then centrifuged at 10,000 rpm for 5 min at 4 °C. The supernatants were collected and divided into aliquots, which were handled as quickly as possible to reduce the risk of protein degradation. Finally, the samples were stored at −80 °C until further analysis.
Protein Assay
The Pierce™ BCA Protein Assay Kit (Thermo Fisher Scientific, Waltham, MA, United States) was used for protein determination. First, the albumin standards and the samples were prepared; the aliquots were diluted tenfold. The standards and the diluted samples were placed into a 96-well plate, and the freshly mixed reagents were added. The plate was covered, gently shaken for 30 s, and incubated for 30 min at 37 °C. After cooling to room temperature, the plate was placed into the plate reader (Tecan Spark 20M; Männedorf, Switzerland), and the absorbance was measured at 562 nm.
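For readers who want to reproduce this step numerically, the sketch below shows how total protein can be back-calculated from a BSA standard curve and how the dilution to the working concentration used for the cytokine arrays follows. All absorbance values and the linear-fit approach are illustrative assumptions, not data from this study.

```python
# Hypothetical illustration of BCA quantification: fit a BSA standard curve
# and interpolate total protein in tenfold-diluted brain lysates.
import numpy as np

# Assumed example values -- not measured data from this study.
bsa_conc = np.array([0, 125, 250, 500, 1000, 2000])        # µg/ml BSA standards
bsa_a562 = np.array([0.05, 0.16, 0.28, 0.52, 0.98, 1.85])  # absorbance at 562 nm

# The BCA response is close to linear in this range; fit A = m*c + b.
m, b = np.polyfit(bsa_conc, bsa_a562, 1)

def protein_conc(a562, dilution_factor=10):
    """Back-calculate total protein (µg/ml) in the undiluted lysate."""
    return (a562 - b) / m * dilution_factor

a_sample = 0.44                    # assumed absorbance of a diluted sample
c_total = protein_conc(a_sample)   # µg/ml in the original supernatant
# Dilution needed to reach the 100 µg/ml working concentration:
print(f"total protein: {c_total:.0f} µg/ml; dilute {c_total / 100:.1f}-fold")
```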
ELISA in Rat Samples
A 96-well chemiluminescence ELISA array from Signosis, Inc. (Rat Cytokine ELISA Plate Array, Catalog Number: EA-4004; Santa Clara, CA, United States) was used to detect cytokines in the striatal samples of aged and young rats. The assay was performed according to the manufacturer's instructions. The brain homogenate supernatants were diluted to 100 µg/ml of total protein, calculated from the protein assay results.
A 96-well white plate was divided into six sections, one section per sample. Three sections were used for three young control rats, while the other three were used to measure the cytokines in the brains of the aged rats (Figure 1). In each section, the wells contained 16 specific cytokine capture antibodies. The cytokines in the test sample were sandwiched between the first and second antibodies and visualized through avidin-biotin-horseradish peroxidase (HRP) binding using a luminescent substrate. The luminescence was detected with a Tecan Spark 20M (Männedorf, Switzerland) plate reader.
ELISA in Mouse Samples
For determination of cytokine expression in mouse brain homogenates, the Signosis Mouse Cytokine ELISA Plate Array I (EA-4003; Signosis Inc., Santa Clara, CA, United States) was used. The assay was performed according to the manufacturer's instructions. The brain homogenate supernatants were diluted to 1,000 µg/ml of total protein, calculated from the protein assay results. Altogether, 24 different cytokines were determined, and the luminescence was compared with that of WT mice [luminescence intensity ratio (LIR)]: APOB-100 and APP-PSEN1 mice were each compared with the group of WT mice. For each strain, a pool of the left hemisphere (striatum) of five animals was used.
A 96-well white plate was divided into four sections, with two parallels per sample. Two sections were used for the pool of five WT mice as control, while the other two were used to measure the cytokines in the brains of the pool of five transgenic mice (APOB-100 or APP-PSEN1, respectively). In each section, the wells contained 24 specific cytokine capture antibodies (Figure 2). The cytokines in the test sample were sandwiched between the first and second antibodies and detected through avidin-biotin-HRP binding as a luminescent signal. The luminescence was detected as in the rat assay.
Morris Water Maze Test in Rats (a Pilot Study)
The Morris water navigation task measures spatial memory in rodents. The young and aged rats were trained for 4 days to escape onto a hidden platform from each of the cardinal starting positions (north, west, south, east) in the maze. The platform was placed in the south-east quadrant of the pool, and extra-maze cues in the lab were used to facilitate the orientation of the animals. The rats completed three daily trials with an intertrial interval of 30 min. Escape latency and swimming path were recorded using Smart v3.0 video tracking software (Panlab, Spain). Escape latencies of the two groups were compared and analyzed by repeated measures ANOVA.
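A minimal sketch of such a repeated measures analysis is given below, assuming hypothetical escape latencies in long format; the column names and values are illustrative only, not the recorded data.

```python
# Within-subject effect of training day (learning) on escape latency,
# sketched for one group of four assumed rats in long format.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "rat":     [f"r{i}" for i in range(1, 5) for _ in range(4)],
    "day":     [1, 2, 3, 4] * 4,
    "latency": [55, 40, 28, 20, 60, 44, 30, 22,   # assumed values (s)
                58, 47, 33, 25, 62, 50, 36, 27],
})

# AnovaRM handles within-subject factors only; a group x day interaction
# (young vs. aged) would require a mixed-design model instead.
res = AnovaRM(df, depvar="latency", subject="rat", within=["day"]).fit()
print(res)
```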
Novel Object Recognition Test in Rats (a Pilot Study)
The Novel Object Recognition (NOR) assay is a model for investigating recognition memory in rodents. The task consists of two phases: in Trial 1 (t1), the animal is familiarized with two identical objects in the test box; in Trial 2 (t2), after a 5-h intertrial delay, one of the familiar objects (O) is replaced by a novel object (N), and the exploration time of each object is measured for 3 min. The young and aged animals were observed through a video camera system. Recognition was characterized by the discrimination index (DI) = (t2,novel − t2,familiar)/(t2,novel + t2,familiar) × 100; the higher the DI, the better the recognition memory. DIs of the two groups were compared and analyzed with an independent samples t-test.
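The DI computation itself is a one-liner; the sketch below evaluates it for assumed exploration times.

```python
# Discrimination index as defined above, with assumed exploration
# times (seconds) from the 3-min test phase.
def discrimination_index(t_novel, t_familiar):
    return (t_novel - t_familiar) / (t_novel + t_familiar) * 100

print(discrimination_index(t_novel=12.0, t_familiar=8.0))  # 20.0 -> intact memory
print(discrimination_index(t_novel=5.0, t_familiar=5.0))   # 0.0  -> no recognition
```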
Magnetic Resonance Imaging in Mice
T2-weighted anatomical scans were acquired on a 1T preclinical nanoScan MRI scanner (Mediso Ltd., Budapest, Hungary) equipped with 450 mT/m gradients and a 20-mm-diameter transmit/receive volume coil. During imaging, mice were anesthetized with 1.5% isoflurane in medical oxygen and placed in prone position on the MRI bed. A three-dimensional FSEMS sequence was acquired with the following parameters: TR = 2 s, effective TE = 75.8 ms, ETL = 16, number of excitations = 3, matrix size = 120 × 96 × 64, and FOV = 30 mm × 30 mm × 19.2 mm.
Semi-automatic segmentation was performed in VivoQuant software (inviCRO) to delineate the ventricles. First, a rough region of interest (ROI) was drawn manually on the brain, and then the ventricles were segmented from it by a connected thresholding algorithm, with thresholds calculated by Otsu's method.
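A schematic re-implementation of this segmentation logic in Python (using scikit-image and SciPy rather than VivoQuant, and placeholder arrays instead of real scans) might look as follows.

```python
# Otsu threshold inside a manual brain ROI, then connected-component
# selection; `t2` and `roi` are placeholders, not real scan data.
import numpy as np
from skimage.filters import threshold_otsu
from scipy import ndimage

t2 = np.random.rand(64, 96, 120)       # placeholder T2-weighted volume
roi = np.zeros_like(t2, dtype=bool)
roi[20:44, 30:66, 40:80] = True        # placeholder manual brain ROI

thr = threshold_otsu(t2[roi])          # Otsu threshold over ROI voxels
candidate = (t2 > thr) & roi           # bright (CSF-filled) voxels in T2
labels, n = ndimage.label(candidate)   # connected components

# Keep the largest connected component as the ventricle mask.
sizes = ndimage.sum(candidate, labels, range(1, n + 1))
ventricles = labels == (np.argmax(sizes) + 1)
volume_voxels = ventricles.sum()       # multiply by voxel volume for mm^3
```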
Surgery and Sample Collection
Animals were anesthetized with chloral hydrate (450 mg/kg i.p.). The right jugular vein was exposed, and the MAB1.4.3 microdialysis probe was inserted into the vein. After checking the flow through the peripheral probe, the tubings of the probe were exteriorized under the scapulae. Then, the animals were placed in a Stoelting stereotaxic instrument, and the brain probe (MAB8.4.3) was implanted into the left striatum using the following coordinates with respect to the bregma: anterior-posterior (AP), +0.2 mm; from midline (ML), −2.2 mm; and dorso-ventral (DV), −3.2 mm. Microdialysis probes were connected to a CMA/102 microdialysis pump and perfused with artificial cerebrospinal fluid (aCSF, brain probe) or artificial peripheral perfusion fluid (aPPF, peripheral probe) at a flow rate of 1.0 µl/min. After a 30-min equilibration period, the animals were treated with intranasal (IN) quinidine (QND) into the left nostril, and sample collection was continued for 3 h. The microdialysate samples were collected every 30 min and placed on dry ice immediately. The frozen samples were stored at −80 °C until transfer to the bioanalytical laboratory.
Bioanalysis of Quinidine in Dialysate Samples
Quantification of QND in the dialysate samples was performed on a Sciex 6500 QTrap hybrid tandem mass spectrometer coupled to an Agilent 1100 HPLC system. Electrospray ionization was used in positive ion detection mode with MRM transitions of 325.2/307.2 (quantifier) and 325.2/172 (qualifier) at collision energies of 31 and 45 V, respectively. The dwell time of the transitions was 300 ms. Source conditions were: curtain gas, 45 arbitrary units (au); spray voltage, 5,000 V; source temperature, 450 °C; nebulizer gas, 40 au; drying gas, 40 au; and declustering potential, 171 V.
RESULTS
For proper preparation of the striatal samples for the cytokine ELISA array, sufficient dilution of the supernatants was necessary. To prepare these solutions, the total protein levels first had to be determined. The protein assay results are shown in Table 1 for rats and in Table 2 for mice.
Total Protein Levels in the Rat Striatum
For determination of the total protein level in the striatal supernatant, three old and three young rats were used. The results are shown in Table 1. The total protein values were higher in the young subjects than in the aged group. For the cytokine ELISA plate array, the lysates were diluted to a protein concentration of 100 µg/ml.
In the mouse experiments, a pooled sample of the left hemispheres of five animals per group was used for the preparation of brain homogenates. Otherwise, the total protein levels were determined as in the rat experiments. The results are shown in Table 2. Afterward, the lysates were likewise diluted to 100 µg/ml protein concentration for the cytokine assay.
Cytokine Levels in Healthy Aged and Young Rats
Cytokine profiling was performed on rat striata by analyzing the expression of 16 cytokines (Figures 3A,B) and on pooled mouse hemispheres for 24 cytokines (Figures 4A,B).
The LIRs of aged relative to young rats, together with the function of each cytokine, are shown in Table 3.
Cytokine Levels in WT and Transgenic (APOB-100 and APP-PSEN1) Mice
As known from the literature, APOB-100 transgenic mice are widely accepted models of neurodegeneration of vascular origin (Bereczki et al., 2008; Hoyk et al., 2018). In this cytokine assay, vascular endothelial growth factor (VEGF) expression showed the largest increase in APOB-100 mice, with more than 13-fold upregulation relative to control. VEGF is a signal protein produced by cells to stimulate blood vessel formation, and it induces angiogenesis in the brain. The second factor, platelet-derived growth factor (PDGF-BB), also showed a dramatic upregulation (more than 11-fold relative to WT) in APOB-100 transgenic mice. Interleukin-17A (IL-17A), a proinflammatory cytokine produced by activated T cells, was also significantly increased. This cytokine regulates the activity of NF-κB and mitogen-activated protein kinases (MAPKs), can stimulate cyclooxygenase-2 (COX-2), and enhances nitric oxide (NO) production. In contrast, the expression of some other inflammatory factors, such as IL-1α, IL-1β, IL-2, and IL-4, was unchanged or downregulated in APOB-100 mice. These cytokines have a crucial role in inflammation and cellular immunity.
The double-humanized mouse model of Alzheimer's disease (APP-PSEN1 mice) was also compared with littermate WT mice. The most strongly upregulated cytokines in the brain were resistin (9.59-fold increase), IL-17A (6.31-fold increase), and granulocyte-macrophage colony-stimulating factor (GM-CSF) (5.47-fold increase). Resistin is secreted by adipose tissue and has been shown to elevate low-density lipoprotein (LDL) cholesterol; it accelerates the accumulation of LDL in the arteries, increasing the risk of vascular diseases. IL-17A is a proinflammatory protein, while GM-CSF regulates macrophage number and function and is a product of cells activated by inflammation and pathological conditions.
The LIRs as a marker of relative expression levels of the 24 cytokines tested for APOB-100 and APP-PSEN1 mice are presented in Table 4. The comparative bar graphs are shown in Figures 4A,B.
Morris Water Maze Test in Rats
Both groups successfully learned the task, as shown by the significant decreases in the latency to find the hidden platform across the 4 days (Figure 5). There was no significant difference between the performance of the two groups.
NOR Test in Rats
Both old and young rats explored the novel and the familiar object for about the same time, showing no sign of recognition memory [DI = (N−O)/(N+O); Figure 6]. Three of the old rats were freezing in the test box during the majority of the t2 test period. Young rats explored the objects (novel + familiar) almost three times longer than old rats did; the difference narrowly missed statistical significance at the 5% level.
Brain MRI of Aged Rats
The results of MR imaging in old rats are presented in Table 5 and Figure 7. There were no significant morphological changes in the brains with advanced age.
Brain MRI of WT and Transgenic Mice
Magnetic resonance imaging was acquired on three groups of mice: a WT group of four males (301.8 ± 44 days old), an APOB-100 group of four males (345.5 ± 5.5 days old), and an APP-PSEN1 group of two males (398.5 ± 0.5 days old) (Table 6). The volumes of the segmented ventricles were determined for each mouse, and group means and standard deviations were calculated (Figure 8). APOB-100 mice had significantly enlarged ventricles. (Table 4 color key: yellow, more than 5-fold increase; blue, less than 0.5-fold decrease in the luminescence intensity ratio compared with littermate WT mice.)
Brain Penetration of IN P-Glycoprotein Substrate QND in WT and Transgenic Mice
It is known from the literature that the IN route of drug administration can bypass the BBB (Erdő et al., 2018). Nasally administered drugs can penetrate the brain via the olfactory or trigeminal pathways (Erdő et al., 2018), and the molecules may also cross the nasal mucosa paracellularly (where tight junctions are missing) or by sensory neuronal endocytosis, reaching the central nervous system directly (the first entry sites being the olfactory bulb and the brainstem). The compounds are then distributed throughout the brain parenchyma.
In case of systemic administration, the brain penetration of this molecule is restricted by the P-gp efflux pump (Sziráki et al., 2011, 2013).
FIGURE 5 | Learning and memory performance of young and aged Wistar rats in the Morris water maze test. N = 4/group.
Only approximately 30% of the blood level is reached in the brain after intravenous or intraperitoneal treatment in rodents (Sziráki et al., 2011, 2013). In the current experiments, QND was applied in a gel vehicle, which ensures continuous drug release and absorption during the observation period (3 h) (Figure 9). A previous experiment provided evidence that IN delivery in gel formulation has several advantages over nasal solutions (Bors L.A. et al., 2020). In WT, APOB-100, and APP-PSEN1 transgenic mice, the nasal absorption pattern was similar in our microdialysis experiments: after a rapid absorption peak, a long-lasting plateau phase follows in the concentration-time profiles (Figures 9A-C). The Cmax and AUC values are highest in the WT mice (Figures 9D,E), while the AUCbrain/AUCblood ratio is similar in all three strains. The brain exposure is higher than the blood concentration in all three groups of mice, suggesting no significant role of capillary endothelial efflux pumps of the nasal mucosa in drug absorption. Based on these dual-probe microdialysis results, it can be concluded that there is no remarkable difference in nasal barrier function between the WT and diseased mice; only a transiently higher brain uptake of QND can be seen in the early phase (0.5-1.0 h) after nasal exposure in healthy animals compared with the APP-PSEN1 and APOB-100 mice.
FIGURE 7 | Four consecutive coronal MRI sections of the brain of three aged male Wistar rats. The ages and body weights are shown in Table 5.
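The pharmacokinetic read-outs referred to here (Cmax, the AUC values, and the AUCbrain/AUCblood ratio) can be obtained from the sampled concentration-time points by trapezoidal integration, as sketched below with assumed concentration values rather than the measured microdialysis data.

```python
# PK metrics from 30-min microdialysis fractions over the 3-h window.
import numpy as np

t = np.arange(0.5, 3.5, 0.5)                    # h, six 30-min fractions
c_brain = np.array([42, 60, 55, 52, 50, 49.0])  # assumed QND levels, nM
c_blood = np.array([30, 41, 40, 39, 38, 37.0])  # assumed QND levels, nM

auc_brain = np.trapz(c_brain, t)                # linear trapezoidal AUC(0.5-3 h)
auc_blood = np.trapz(c_blood, t)
print(f"Cmax(brain) = {c_brain.max():.0f} nM at t = {t[c_brain.argmax()]} h")
print(f"AUCbrain/AUCblood = {auc_brain / auc_blood:.2f}")  # >1: brain exposure higher
```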
CONCLUSION AND DISCUSSION
This study aimed to analyze healthy aging and age-related neurodegenerative diseases (Alzheimer's disease, atherosclerosis) in rodents. Healthy aging was investigated in 14- to 21-month-old rats, while the neurodegenerative processes were studied in APP-PSEN1 and APOB-100 transgenic mice at 9-13 months of age, when the disease symptoms had already developed (Bjelik et al., 2006; Bereczki et al., 2008; Hoyk et al., 2018). The study focused on three main areas: cerebral cytokine expression compared with healthy individuals; anatomical and morphological changes in the brain assessed by MRI compared with controls; and nasal barrier permeability for a P-gp substrate examined by in vivo dual-probe microdialysis. In addition, the behavioral status of the rats was evaluated using two memory and learning assays: the Morris water maze test and the NOR test.
Based on the results, it can be concluded that in normally aged rats the hematopoietic cytokine stem cell factor (SCF) is strongly upregulated, which may lead to enhanced survival and differentiation of monocytic cells. MCP-1 and RANTES are also upregulated with advanced age; these chemokines promote cell migration and the infiltration of monocytes/macrophages and are chemotactic for T cells, eosinophils, and basophil leukocytes. Together, these processes lead to chronic local inflammation and an immune response in the brain of old subjects. The immunoassay in mice also showed upregulated inflammatory markers. Enhanced VEGF and PDGF-BB levels were detected in APOB-100 transgenic mice. On the other hand, an early study revealed significantly lower microvascular density in the brain of APOB-100 transgenic animals than in WTs, suggesting defective VEGF signaling (Süle et al., 2009). Indeed, it was demonstrated in APOE−/− mice that hyperlipidemia hinders VEGF-induced angiogenesis (Zechariah et al., 2013). Moreover, a highly increased level of VEGF may induce disruption of the BBB (Lange et al., 2016), which is in line with a previous result (Hoyk et al., 2018). In APP-PSEN1 mice, increased levels of resistin and GM-CSF were observed, indicating an enhanced level of LDL and stimulated macrophage function. Previously, a significantly increased serum resistin level was found in human patients with Alzheimer's disease (Demirci et al., 2017). Although resistin is secreted mainly by adipocytes, it has been detected in other tissues and in the cerebrospinal fluid as well (Badoer et al., 2015). Moreover, in situ production of resistin has been demonstrated in mouse brain (Morash et al., 2002). The proinflammatory cytokine IL-17A was upregulated in both diseased mouse strains, suggesting an overproduction of COX-2, IL-6, and NO. This cytokine also regulates NF-κB and MAPKs and is a marker of T cell activation.
Intranasal administration is a promising strategy to bypass the BBB and deliver CNS drugs directly to the brain. However, the efficacy of this process can be influenced by efflux transporters, such as P-gp. Altered function and expression of P-gp have been found in Alzheimer's disease patients (van Assema et al., 2012) and in APOB-100 transgenic mice as well (Hoyk et al., 2018). Accordingly, investigating nasal barrier permeability in disease model animals is of primary importance for the future development of possible therapeutic approaches. In the nasal barrier studies, QND, a reference probe-substrate of P-gp, was used to characterize barrier permeability (Sziráki et al., 2011, 2013). At the BBB, P-gp is the major efflux transporter responsible for protecting the brain from xenobiotics, whereas in the nasal cavity there is a direct pathway for molecules to be absorbed into the brain through the nasal mucosa, bypassing the BBB. In the current experiment, the P-gp substrate was administered as a nasal gel formulation, and its penetration was monitored in the brain and in the periphery. In both diseased mouse strains as well as in WT mice, a rapid absorption was followed by long-lasting continuous release and penetration of QND. These results indicate unchanged nasal barrier function in the transgenic mouse models compared with WT and provide evidence that the BBB plays no remarkable role in drug absorption through the nose-to-brain axis in mice. In contrast, a previous study described the role of peripheral P-gp transporters in modulating nasal drug penetration to the brain in healthy rats (Bors L.A. et al., 2020).
It was hypothesized that the extensive neuronal death observed in APOB-100 and APP-PSEN1 mice should affect brain morphology. This was investigated using MRI in both strains and compared with controls. Remarkable enlargement of the lateral and dorsal ventricles and a moderate increase in the size of the aqueduct (fourth ventricle) were detected in the brain of APOB-100 mice. On the other hand, no significant dilation of the ventricles was detected in APP-PSEN1 mice compared with WT. In APOB-100 mice, the dilation of the ventricles may be the consequence of enhanced production or infiltration, or of defective drainage, of the cerebrospinal fluid. The downregulation of the cerebral glymphatic system in this strain with advanced age and the reduced function of mitochondria (Bereczki et al., 2008) may also contribute to the lower pumping and secretory function of the ependymal cells in the choroid plexus.
In the learning and memory assays, only a small number of young and aged rats was tested. This pilot study therefore provides only a preliminary result, namely, that there was no significant difference in the cognitive performance of normally aged and young rats, indicating only a mild, non-symptomatic loss of cerebral function with healthy aging. Diminished spatial memory has already been described in earlier studies (Harrison et al., 2009; Kennard and Harrison, 2014) for APP-PSEN1 mice, a well-characterized model of Alzheimer's disease.
Based on our cytokine array results, SCF, MCP-1, and RANTES can be studied further to determine whether they can serve as plasma-measurable biomarkers of aging. Likewise, VEGF, PDGF-BB, and IL-17A can be proposed as markers of hypertriglyceridemia with brain dysfunction, and GM-CSF, IL-17A, and resistin as indicators of Alzheimer's-like neurodegeneration. Further longitudinal experiments are needed to study the kinetics of the overexpression of these proteins during the progression of the pathology and their detectability in peripheral samples.
In conclusion, the current study revealed cerebral upregulation of the cytokines VEGF, PDGF-BB, and IL-17A in APOB-100 mice and induction of resistin, GM-CSF, and IL-17A in APP-PSEN1 transgenic mice, which indicates a possible role of these proteins in Alzheimer's-like pathology. The lack of a BBB contribution to nasal drug absorption in transgenic mice and the unchanged cognitive status with healthy aging in normal rats were also shown. Brain MRI confirmed previous data on enlarged cerebral ventricles in APOB-100 mice (possibly a consequence of damaged energy metabolism and ependymal dysfunction) and the lack of morphological abnormalities in APP-PSEN1 mice. Further studies are needed to analyze the effect of possible therapeutic interventions on the inflammatory balance shift that accompanies physiological and pathological aging processes.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by the Directorate for the Safety of the Food Chain and Animal Health, Budapest and Pest County Agricultural Administrative Authority, Hungary.
"Biology",
"Medicine"
] |
Tuning of the elastic modulus of a soft polythiophene through molecular doping †
Molecular doping of a polythiophene with oligoethylene glycol side chains is found to strongly modulate not only the electrical but also the mechanical properties of the polymer. An oxidation level of up to 18% results in an electrical conductivity of more than 52 S cm⁻¹ and at the same time significantly enhances the elastic modulus from 8 to more than 200 MPa and the toughness from 0.5 to 5.1 MJ m⁻³. These changes arise because molecular doping strongly influences the glass transition temperature Tg and the degree of π-stacking of the polymer, as indicated by both X-ray diffraction and molecular dynamics simulations. Surprisingly, a comparison of doped materials containing mono- or dianions reveals that, for a comparable oxidation level, the presence of multivalent counterions has little effect on the stiffness. Evidently, molecular doping is a powerful tool that can be used for the design of mechanically robust conducting materials, which may find use within the field of flexible and stretchable electronics.
Introduction
Conjugated polymers receive considerable attention for numerous applications, from wearable electronics to soft robotics, that require well-adjusted mechanical properties. [2-4] Moreover, conjugated polymers can be blended with insulating polymers or be modified through additives that act as crosslinkers or have a plasticizing effect. [2] Molecular dopants are additives that are widely used to modulate the electrical properties of conjugated polymers. Most conjugated polymers are relatively stiff and feature a high elastic modulus of several 100 MPa to several GPa at room temperature [3-6] due to a high glass transition temperature Tg and/or a high degree of crystalline order. [8,9] As a result, doping is typically not considered a tool for adjusting the elastic modulus of conjugated polymers.
To compare the doping-induced changes in elastic modulus that have been observed for different polymers, we here define a figure of merit Z = log(Edoped/Eneat), which considers the ratio of the elastic modulus of the doped material Edoped and that of the neat polymer Eneat. Only a few studies have investigated how the elastic modulus of conjugated polymers changes with molecular doping, and reported values are limited to Z ≤ 0.9 for unaligned material (Table 1). A comparison of the few existing reports allows us to predict what type of change in stiffness can be expected upon doping, as discussed in more detail below: doping of stiff conjugated polymers can lead to a slight decrease in modulus, while doping of soft materials tends to increase the modulus.
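As a quick illustration, the figure of merit can be evaluated directly; the sketch below uses the moduli reported later in this work (8 MPa for the neat polymer, 232 MPa with F4TCNQ, 377 MPa with F2TCNQ).

```python
# Figure of merit Z = log(E_doped / E_neat) as defined above.
from math import log10

def figure_of_merit(e_doped_mpa, e_neat_mpa):
    return log10(e_doped_mpa / e_neat_mpa)

print(f"F4TCNQ: Z = {figure_of_merit(232, 8):.1f}")  # ~1.5
print(f"F2TCNQ: Z = {figure_of_merit(377, 8):.1f}")  # ~1.7
```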
For stiff conjugated polymers the impact of doping on the mechanical properties appears to be dominated by a plasticization-type effect. For example, poly(3-hexylthiophene) (P3HT) with a regioregularity of more than 97% and Tg ≈ 23 °C was found to have a modulus of about 340 MPa at room temperature, which slightly decreased to 270 MPa upon sequential doping with 9 mol% Mo(tfd-COCF3)3. [7] It is feasible that the ingression of the dopant reduced the direct interactions between adjacent polymer chains, resulting in a slight reduction in stiffness but a similar Tg ≈ 21 °C. A similar plasticization-type impact of doping has been observed for P3HT (regioregularity = 95%) doped with 9 mol% of a latent dopant based on ethylbenzene sulfonic acid (EBSA) capped with a 2-nitrobenzyl moiety that is released upon heating, which resulted in a decrease in Tg from 30 to 15 °C and in modulus from 900 to 345 MPa, i.e. a low figure of merit of Z ≈ −0.4 (Table 1). [8] Further, a diketopyrrolopyrrole (DPP)-based copolymer (E = 374 MPa) [12] displayed a reduction in Tg from 55 to 27 °C upon doping with 1 wt% of 2,3,5,6-tetrafluoro-7,7,8,8-tetracyanoquinodimethane (F4TCNQ) (see Fig. 1 for chemical structure), resulting in a more stretchable material as evidenced by a higher crack onset strain. [13] A reduction in stiffness upon molecular doping has also been reported for stretch-aligned polymer films or fibers composed of polyacetylene, [5] poly(2,5-dimethoxy-p-phenylenevinylene), [14] poly(2,5-thienylene vinylene) [15] and P3HT. [7] Molecular doping of conjugated polymers with a lower stiffness can have the opposite effect on the elastic modulus (Table 1). Poly(3-octylthiophene) (P3OT) and poly(3-dodecylthiophene) (P3DDT) prepared by oxidative polymerization feature a low regioregularity of only 75% and hence a low modulus of 60 and 50 MPa at room temperature, [9,11] presumably due to a lower crystallinity. Moulton and Smith have argued that molecular doping of these relatively soft materials leads to an increase in π-stacking, which results in a considerably higher elastic modulus. [9] As a result, the modulus of unaligned P3OT was found to increase 8-fold to 470 MPa upon doping with 18 mol% FeCl3, the highest reported figure of merit of Z ≈ 0.9. [11] Therefore, it can be anticipated that soft conjugated polymers display a more substantial change in modulus upon doping, which may allow molecular doping to be used as a tool to modulate not only the electrical but also the mechanical properties.
To explore this hypothesis, we set out to study the impact of molecular doping on the mechanical properties of a soft conjugated polymer. We chose to focus on a polythiophene with tetraethylene glycol side chains, p(g42T-T) (see Fig. 1 for chemical structure), which belongs to a class of polar conjugated polymers that currently receive widespread attention for a myriad of applications from bioelectronics [16,17] to thermoelectrics [18,19] and energy storage. [20,21] p(g42T-T) is very soft due to a low crystallinity and Tg ≈ −46 °C, [22] resulting in a low shear storage modulus of only 8 MPa, as we will show in this paper, and therefore doping can be expected to lead to a considerable increase in stiffness (cf. discussion above). Further, the polymer has a low ionization energy of IE0 ≈ 4.7 eV [22] and hence can be doped with both F4TCNQ (electron affinity EA0 ≈ 5.2 eV) and the anion of F4TCNQ (EA− ≈ 4.7 eV). [23] The presence of F4TCNQ dianions opens up the possibility to study the impact of multivalent counterions on the mechanical properties, which have been proposed to lead to ionic-type crosslinking when phytic acid [24] or MgSO4 [25] is added to the conjugated polymer-based material. We find that doping leads to enhanced π-stacking as well as an increase in Tg. The presence of mono- or dianions, however, which can be readily created through doping with F4TCNQ, is found to have no impact on the modulus, while monoanions improve the ductility and toughness of the material. The electrical and mechanical properties are found to correlate with the oxidation level. An electrical conductivity of up to 52 S cm⁻¹ upon doping with F4TCNQ is accompanied by a 29-fold change in elastic modulus from 8 to 232 MPa, yielding a figure of merit of Z ≈ 1.5. An even higher increase to 377 MPa is observed when the dopant 2,5-difluoro-7,7,8,8-tetracyanoquinodimethane (F2TCNQ) is used, which yields a value of Z ≈ 1.7.
Results and discussion
In a first set of experiments, we compared the thermomechanical properties of neat and strongly doped p(g42T-T). Doping was achieved by processing the polymer and the dopant F4TCNQ from the same solution, a 2:1 mixture of chloroform (CHCl3) and acetonitrile (AcN), which was drop cast at 40 °C to obtain 30- to 80-µm-thick films (see Experimental for details).
The doped material had a uniform appearance, in stark contrast to the granular texture of bulk samples of P3HT co-processed with F4TCNQ. [8] Neat p(g42T-T) was characterized with oscillatory shear rheometry at 0.16 Hz because the polymer is soft and yields at low strains, which prevented us from characterizing free-standing samples over a wide range of temperatures. The shear storage modulus G′ decreases from a value of about 10⁹ Pa at −80 °C to 10⁸ Pa at −40 °C; storage moduli of glassy polymers are around 1 GPa. [4] Thus, we assign this drop in storage modulus to the onset of main-chain relaxation, possibly accompanied by relaxation of part of the side chains. The shear loss modulus G″ shows a peak at −62 °C with a broad shoulder at higher temperatures (Fig. 2a). We here assign the peak in G″ to the Tg. We also determined the Tg with differential scanning calorimetry (DSC) using a cooling rate q = −10 °C min⁻¹ (Fig. S1, ESI†) and with dynamic mechanical analysis (DMA) using the glass fiber mesh method and a higher frequency of 1 Hz (Fig. S2a, ESI† and Table 2), which yielded values of Tg ≈ −59 °C and −46 °C, respectively. Fast scanning calorimetry (FSC) was used to study the influence of the cooling rate q, ranging from −0.1 to −1000 K s⁻¹, on the fictive temperature (equivalent to Tg for q = −0.17 K s⁻¹). The dependence of the fictive temperature on q could be described with the Williams-Landel-Ferry (WLF) equation (see Fig. S1, ESI†), which is consistent with an α-relaxation process, i.e. the main-chain relaxation. We would like to point out that relaxation of the side chains is likely frozen in at significantly lower temperatures, as reported for polymethacrylates with oligoethylene glycol side chains, which feature a β-relaxation temperature below −100 °C. [26] To rule out that the chain length of p(g42T-T) strongly influences the Tg, we also studied a low-molecular-weight fraction collected through fractionation of the as-synthesized polymer with acetone. DMA of the acetone fraction of p(g42T-T) revealed a Tg ≈ −51 °C, which is only marginally lower than the Tg ≈ −46 °C of p(g42T-T) with Mn ≈ 24 kg mol⁻¹ (Fig. S2, ESI†). We therefore conclude that the chain length does not strongly influence the Tg of p(g42T-T) for the studied range of molecular weights.
Co-processing of p(g42T-T) with 20 mol% F4TCNQ resulted in a stiff solid, and hence we chose to characterize the doped material with DMA in tensile mode at 1 Hz. The tensile storage modulus E′ has a very high value of 8.4 × 10⁹ Pa in the glassy state at −80 °C and gradually drops to 1.4 × 10⁹ Pa at 20 °C, which is a more than 40-fold increase compared to the neat polymer (Z ≈ 1.6), for which we measured a tensile storage modulus of only 34 × 10⁶ Pa at 20 °C and 1 Hz (Fig. 2b). The value measured for the neat polymer is in agreement with the shear storage modulus at 20 °C when assuming a Poisson's ratio of ν = 0.5, so that E′ = 2(1 + ν) × G′ = 3G′. The tensile loss modulus E″ of p(g42T-T) doped with 20 mol% F4TCNQ features a prominent peak at 3 °C, which we assign to the Tg (Table 2).
We carried out transmission wide-angle X-ray scattering (WAXS) to compare the crystalline order of neat and doped p(g42T-T) bulk samples. The WAXS diffractogram of neat p(g42T-T) features distinct h00 diffraction peaks (h = 1-3; q100 = 0.36 Å⁻¹) due to lamellar stacking (Fig. 3a). Instead of a π-stacking peak there is a broad amorphous halo at q = 1.6 Å⁻¹, which indicates that the backbones of the polymer are disordered. The WAXS diffractogram of p(g42T-T) co-processed with 20 mol% F4TCNQ is remarkably different. The h00 diffraction peaks are now situated at a lower scattering vector (h = 1-2; q100 = 0.30 Å⁻¹), which is commonly observed for polythiophenes doped with F4TCNQ and arises because the dopant is located in the side-chain layers, so the lattice expands along the side-chain direction. [27] Furthermore, a prominent peak can now be discerned at q010 = 1.84 Å⁻¹ (Fig. 3a), which we assign to π-stacking of the p(g42T-T) backbone.
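For orientation, the real-space periodicities implied by these peak positions follow from the standard relation d = 2π/q, as the short calculation below illustrates.

```python
# Real-space spacings for the diffraction peaks quoted above (q in 1/Angstrom).
import math

for label, q in [("lamellar, neat", 0.36), ("lamellar, doped", 0.30),
                 ("pi-stacking, doped", 1.84)]:
    print(f"{label}: d = {2 * math.pi / q:.1f} Angstrom")
# lamellar spacing grows from ~17.5 to ~20.9 Angstrom (side-chain layer
# expansion upon doping); the pi-stacking distance is ~3.4 Angstrom.
```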
The doping process can strongly influence the nanostructure of conjugated polymers. [28,29] To separate the impact of doping from that of processing (e.g. through a change in the solubility of the polymer upon doping), we also vapor-doped thin films of p(g42T-T) with F4TCNQ, which we analyzed with grazing-incidence wide-angle X-ray scattering (GIWAXS). A diffractogram produced by radially integrating a GIWAXS pattern of neat p(g42T-T) over all azimuthal angles is comparable to transmission WAXS measurements on bulk samples, with distinct h00 diffraction peaks (h = 1-3; q100 = 0.37 Å⁻¹) and a broad halo at q = 1.6 Å⁻¹ (Fig. 3b). Vapor doping with F4TCNQ results in a shift of the h00 diffraction peaks to lower scattering vectors (h = 1-4; q100 = 0.29 Å⁻¹; Fig. 3b), which retain their preferential out-of-plane orientation (Fig. S3, ESI†). In addition, two in-plane diffraction peaks emerge at 1.74 Å⁻¹ and 1.8 Å⁻¹ (Fig. 3b and Fig. S3b, ESI†), which we assign to two distinct π-stacking motifs. Evidently, vapor doping of p(g42T-T) significantly alters the nanostructure of the polymer, which suggests that the observed structural changes are indeed a result of molecular doping and not merely related to changes in processing conditions. The increase in π-stacking upon doping is consistent with the observed increase in Tg and E′ (see Table 1 and Fig. 2). The large number of crystallites that have developed hinders main-chain relaxation of the remaining amorphous fraction, for which the higher Tg is observed, and at the same time leads to reinforcement of the material, especially at T > Tg.
Molecular dynamics (MD) simulations allowed us to gain detailed insight into the structural changes that occur as a result of molecular doping. A computational box was filled with oligomers, and we computed the radial distribution function gt-t(r) of the distance r between the centers of mass of thiophene rings belonging to different oligomers (Fig. 4b). For neutral oligomers gt-t(r) is featureless, which is consistent with the high degree of disorder of the polymer backbones inferred from the X-ray diffractograms (cf. Fig. 3). In contrast, for oligomers with +1 and +2 charges (Oox ≈ 8.3 and 16.7%), gt-t(r) exhibits a pronounced peak at about 4 Å, which arises due to π-stacking of neighboring chains. With a further increase of the doping level to +4 charges (Oox ≈ 33.3%) the oligomers are unable to π-stack, as evidenced by the absence of the peak in gt-t(r). Note that the presence of π-stacking at intermediate doping levels (Oox ≈ 8.3 and 16.7%) and its absence for the neat and highly doped oligomers (Oox ≈ 0 and 33.3%) can also be seen in the MD simulation snapshots (Fig. 4a and Fig. S4b, ESI†). The MD simulations are consistent with our X-ray analysis (Fig. 3), which showed that doped p(g42T-T) forms π-stacks.
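A schematic of how such a radial distribution function can be computed from center-of-mass coordinates is shown below; the coordinates are random placeholders, and the same-oligomer exclusion used in the actual analysis is omitted for brevity.

```python
# Center-of-mass g(r) in a cubic box with the minimum-image convention.
import numpy as np

rng = np.random.default_rng(0)
box = 20.0                                  # nm, cubic box edge
com = rng.uniform(0, box, size=(800, 3))    # placeholder ring centers of mass

d = com[:, None, :] - com[None, :, :]
d -= box * np.round(d / box)                # minimum-image convention
r = np.linalg.norm(d, axis=-1)[np.triu_indices(len(com), k=1)]

bins = np.linspace(0.05, 2.0, 80)
hist, edges = np.histogram(r, bins=bins)
shell = 4 / 3 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
rho_pairs = len(com) * (len(com) - 1) / 2 / box ** 3   # ideal-gas pair density
g = hist / (shell * rho_pairs)              # peaks in g(r) mark pi-stacking
```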
The observed trend in the evolution of π-stacking with the doping level can be understood as follows: for Oox ≈ 8.3 to 16.7% the counterions help to bring oligomer chains together, which promotes π-stacking and increases planarity. Note that planarity is also increased because the bond alternation in the thiophene rings changes from aromatic to quinoid character with increasing oxidation level (see Fig. S5, ESI†). In addition, π-stacking enables polarons to delocalize across adjacent chains, which according to previous reports promotes the pronounced π-stacking that occurs when doping regio-random P3HT with F4TCNQ. [30,31] With a further increase of the doping level to Oox ≈ 33.3%, Coulomb repulsion between adjacent chains becomes dominant and the excess F4TCNQ disrupts the microstructure of the film, which prevents π-stacking. The theoretical oxidation level of Oox ≈ 16.7% corresponds to p(g42T-T) doped with 20 mol% F4TCNQ, which has an Oox ≈ 16.8% (Table S2, ESI†).
We also calculated the radial distribution function gt-b(r) of the distance r between the center of mass of thiophene rings and the center of mass of the benzene ring of F4TCNQ anions (Fig. 4c). For all studied doping levels, we observe a sharp onset in gt-b(r) around 3.5 Å, which is comparable to the donor-acceptor distance of 3 to 5 Å predicted by Spano et al. for P3HT and F4TCNQ. [32,33] We also carried out MD simulations in which we mimicked tensile deformation of the neat and doped material, using a strain rate of 10⁹ s⁻¹, which yields a Young's modulus of almost 4 GPa with only a minor dependence on Oox in the range from 0 to 33.3% (Fig. S6, ESI†). This value is comparable to the storage modulus of 5-9 GPa determined with DMA below −20 °C for p(g42T-T) doped with 20 mol% F4TCNQ (Oox ≈ 16.8%; see Fig. 2).
In a further set of experiments, we studied the impact of the charge of the counterion on the mechanical properties. Each F4TCNQ molecule can undergo two electron transfer processes with polymers that have an IE0 ≤ 4.7 eV, resulting in the formation of F4TCNQ dianions with a charge of −2. [23] Dianion formation is most pronounced for low dopant concentrations of 3 and 6 mol% F4TCNQ, as evidenced by a distinct FTIR absorption peak at νCN = 2131 cm⁻¹ (Fig. 5a and Fig. S7, ESI†). We estimated the oxidation level using FTIR absorption spectra recorded for spin-coated films of p(g42T-T) co-processed with the dopant (Fig. S7 and S8, ESI†). The anion and dianion of F4TCNQ give rise to distinct νCN absorption peaks that correspond to the cyano stretch vibration. We assumed that at low oxidation levels each dopant molecule undergoes an electron transfer with the polymer and compared the relative intensity of the νCN absorption peaks with corresponding FTIR signals recorded for solutions of the lithium and dilithium salts of F4TCNQ. [23] A dopant concentration of 3 mol% F4TCNQ gives rise to an ionization efficiency of ηion ≈ 187%, i.e. most dopant molecules generate two polarons, and hence Oox ≈ 5.7% (Table S2, ESI†). We also included samples doped with F2TCNQ, which can only undergo one electron transfer process with p(g42T-T) per dopant molecule due to its higher EA0 ≈ 5.1 eV and EA− ≈ 4.5 eV (cf. Fig. 5a). For a dopant concentration of 6 mol% F2TCNQ we estimate Oox ≈ 6.4%, assuming that each dopant undergoes one electron transfer with the polymer, i.e. ηion ≈ 100% (Table S2, ESI†). As a result, we are able to carry out a direct comparison of the mechanical properties of doped p(g42T-T) with a similar oxidation level but compensated with counterions that have a charge of −1 (F2TCNQ anions) or −2 (F4TCNQ dianions). We used tensile deformation of free-standing samples at room temperature to analyze the mechanical properties of p(g42T-T). For low oxidation levels the low stiffness made it challenging both to mount samples in our DMA instrument and to ensure their integrity over a wide range of temperatures (see Methods for details). Tensile deformation yielded a comparable Young's modulus of Edoped ≈ (31 ± 2) MPa and (24 ± 4) MPa (Fig. 5b and Table S2, ESI†), which indicates that the charge of the counterion does not influence the stiffness of the doped polymer. WAXS diffractograms recorded for these samples feature a clear π-stacking peak at q010 ≈ 1.84 Å⁻¹ (Fig. S9, ESI†). Moreover, MD simulations of oligomers with charge +1 (Oox ≈ 8.3%) but neutralized with either F4TCNQ anions or dianions yield a comparable radial distribution function between the centers of mass of thiophene rings of different oligomers, with a distinct peak in gt-t(r) at 4 Å (Fig. 5c, d; note that for the MD simulations we used the same dopant, i.e. F4TCNQ). Doping with F2TCNQ and F4TCNQ appears to enhance the order of the polymer to a similar degree, which suggests that the observed increase in Young's modulus can be explained by changes in the conformation of the polymer and π-stacking. We therefore conclude that the presence of dianions does not lead to ionic-type crosslinking of p(g42T-T) in the solid state, since the stiffness of the polymer is not affected by the charge of the counterions. However, p(g42T-T) doped with F2TCNQ displays a significantly larger strain at break of εb ≈ (50 ± 10)% as compared to F4TCNQ-doped material with εb ≈ (30 ± 5)% (Table S2, ESI†). It appears that the presence of more numerous monoanions instead of dianions has a positive impact on the toughness, with values of about 0.8 MJ m⁻³ and 0.5 MJ m⁻³ for p(g42T-T) doped with 6 mol% F2TCNQ and 3 mol% F4TCNQ, respectively.
In a further set of experiments, we compared the impact of the oxidation level on both the mechanical and the electrical properties of doped p(g42T-T). We used tensile deformation at room temperature because we were able to carry out this measurement for a wide range of Oox from 0 to 18.2% (see Methods for details). UV-vis-IR spectra confirm the high oxidation level of the here studied samples doped with F4TCNQ or F2TCNQ, as evidenced by the disappearance of the neat polymer absorption with increasing Oox and the emergence of strong polaronic absorption peaks in the infrared part of the spectrum (Fig. S7 and S8, ESI†). [34] The neat, undoped polymer features a low Young's modulus of Eneat ≈ (8 ± 2) MPa, which is three times lower than the value inferred from oscillatory shear rheometry (Table 2), likely due to the low employed tensile deformation rate of 5 mN min⁻¹. The Young's modulus increases with Oox, first gradually to Edoped ≈ (24 ± 4) MPa at Oox ≈ 5.7%, and then more strongly, reaching a value of Edoped ≈ (232 ± 16) MPa at Oox ≈ 18.2% (Fig. 6a and b), which yields a figure of merit Z ≈ 1.5 (cf. Table 1). The toughness shows a minimal increase for Oox < 10% but then increases rapidly to 5.2 MJ m⁻³ at Oox ≈ 18.2% (Fig. S10c, ESI†). The electrical conductivity displays a similar trend with Oox as the Young's modulus and reaches a value of σ ≈ (52 ± 3) S cm⁻¹ for Oox ≈ 18.2% (Fig. 6b). Doping with F2TCNQ results in a comparable trend even though Oox only reaches 13.5% (estimated by comparing the intensity of the νCN absorption peak for different amounts of dopant; Fig. S8, ESI†), yielding a lower conductivity of σ ≈ (20 ± 3) S cm⁻¹ but, strikingly, a higher Young's modulus of Edoped ≈ (377 ± 85) MPa and hence Z ≈ 1.7 (Fig. S10, ESI†). The close-to-linear correlation between σ and Edoped (Fig. 6c) is akin to the interplay of electrical and mechanical properties that has been observed for uniaxially aligned conjugated polymer tapes and fibers. [9,35,36] Transmission WAXS diffractograms reveal that the intensity of the q010 diffraction due to π-stacking increases with Oox (Fig. S9, ESI†). Since π-stacking aids the hopping of charges between neighboring polymer chains as well as the transmission of mechanical force, σ and Edoped increase in tandem with Oox.
Finally, we explored whether an increase in stiffness can also be achieved with dopants other than F4TCNQ and F2TCNQ. We therefore doped p(g42T-T) with the redox dopants Magic Blue [37] and DDQ as well as the acid dopants PDSA and TFSI [38] (see Table 3 for chemical structures). In particular for 10 mol% Magic Blue we observe a considerable increase in Young's modulus to Edoped ≈ (148 ± 20) MPa, corresponding to Z ≈ 1.3. Intriguingly, the two acid dopants only cause a minor increase in stiffness despite a relatively high electrical conductivity, e.g. σ ≈ (11 ± 2) S cm⁻¹ in the case of TFSI. We have previously observed that 10 mol% of acid dopant leads to considerable π-stacking of p(g42T-T). [38] Intriguingly, p(g42T-T) doped with 18 mol% TFSI features a Tg ≈ −49 °C (Fig. S11, ESI†), which is much lower than the value observed for p(g42T-T) doped with 20 mol% F4TCNQ (see Table 1). Hence, the use of acid dopants may allow the creation of conducting materials that remain relatively soft. We also studied whether the type of side chain influences the extent to which doping changes the modulus. Regioregular P3DDT features a relatively low Young's modulus of Eneat ≈ (45 ± 6) MPa (cf. Table 1), which increases to Edoped ≈ (80 ± 2) MPa upon sequential doping with a saturated solution of F4TCNQ in AcN for 3 days, corresponding to a figure of merit of only Z ≈ 0.2 (gravimetric analysis indicates an uptake of 7 mol% F4TCNQ; σ ≈ (5 ± 1) × 10⁻³ S cm⁻¹).
Conclusions
The polymer p(g42T-T) with tetraethylene glycol side chains is very soft, with a Young's modulus of only 8 MPa at room temperature due to a low degree of crystallinity and a low Tg ≈ −46 °C, measured with DMA. Molecular doping with F4TCNQ or F2TCNQ strongly enhances the degree of π-stacking of the polymer and increases the Tg to 3 °C in the case of an oxidation level Oox ≈ 16.8%. As a result, the Young's modulus increases approximately 29-fold to 232 MPa for p(g42T-T) doped with F4TCNQ (Oox ≈ 18.2%). Our findings are corroborated by molecular dynamics simulations. A comparison of less strongly doped samples with Oox ≈ 5.7%, where doping with F4TCNQ mostly yields dianions, indicated that the charge of the counterions (i.e. −1 for anions or −2 for dianions) does not affect the stiffness of the doped polymer, suggesting that dianions do not lead to ionic-type crosslinks. However, the choice of dopant influences the ductility and toughness of the doped polymer. Doping of p(g42T-T) with F2TCNQ results in an up to 47-fold increase in Young's modulus to 377 MPa, which corresponds to the strongest relative increase reported for any conjugated polymer. Evidently, molecular doping is a powerful tool that can be used to adjust not only the electrical but also the mechanical properties of conjugated polymers, which may spur the field of flexible and stretchable electronics.
Sample preparation
Co-processed samples were prepared by adding solutions of the dopant in AcN (6 g L⁻¹ for PDSA and 2 g L⁻¹ for the other dopants) to solutions of p(g42T-T) in CHCl3 (3 to 20 g L⁻¹ to achieve different polymer:dopant ratios) or P3DDT in CHCl3 (6 g L⁻¹), together with further AcN to ensure a solvent ratio of 2:1 CHCl3:AcN. The dopant mol% is calculated per thiophene ring of the conjugated polymers. Thin films for spectroscopy were spin-coated at a speed of 1000-5000 rpm for 60 s onto glass slides for UV-vis spectroscopy or CaF2 substrates for FTIR spectroscopy to achieve a film thickness of 35 to 190 nm. Thin films for vapor doping were spin-coated at 1000 rpm for 40 s onto silicon substrates using a solution of p(g42T-T) in chlorobenzene (6 g L⁻¹), followed by annealing for 10 minutes at 120 °C and drying under vacuum. Vapor doping was performed in a nitrogen atmosphere by exposing the p(g42T-T) films to F4TCNQ vapor for 15 minutes. Free-standing samples with a thickness of 30 to 80 µm for mechanical testing were drop cast at 30 °C onto glass slides and then removed from the substrate with a sharp blade. Neat p(g42T-T) was frozen in liquid nitrogen prior to the removal of the polymer film from the substrate. Glass-fiber-supported samples were made by coating glass mesh strands cut at 45° with p(g42T-T) (chlorobenzene, 10 g L⁻¹), the acetone fraction of p(g42T-T) (CHCl3, 10 g L⁻¹) or a mixture of p(g42T-T) + 3 mol% F4TCNQ, followed by drying at 30 °C under vacuum for 24 hours. The sample for shear rheometry was prepared in a nitrogen glovebox by heating 10 mg of polymer to 200 °C for 45 minutes while pulling vacuum to ensure that no bubbles were present in the sample, followed by compression with about 1 N of force and cooling. The sample diameter was 3 mm and disposable aluminum parallel plates were used. The thickness of thin and thick films was measured with a KLA Alphastep Tencor D-100 profilometer and a micro-caliper, respectively.
Differential scanning calorimetry (DSC)
DSC measurements were carried out under nitrogen at a flow rate of 60 mL min⁻¹ with a Mettler Toledo DSC2 equipped with a GC 200 gas controller at a heating rate of 10 °C min⁻¹.
Fast scanning calorimetry (FSC)
Measurements were conducted under nitrogen with a Mettler Toledo Flash DSC 1. A small amount of the polymer was transferred directly onto the FSC chip sensor. The sample was first heated to 150 °C to erase the thermal history and then cooled to −50 °C at cooling rates ranging from −0.1 K s⁻¹ to −1000 K s⁻¹. Finally, the sample was heated at 600 K s⁻¹. The fictive temperature was calculated using Moynihan's area-matching method, or by extrapolation if the fictive temperature was below the onset of devitrification. [40]
X-Ray scattering
Transmission wide-angle X-ray scattering (WAXS) was carried out with a Mat:Nordic instrument from SAXSLAB equipped with a Rigaku 003+ high-brilliance microfocus Cu Kα radiation source (wavelength = 1.5406 Å) and a Pilatus 300K detector placed at a distance of 88.6 mm from the sample. Grazing-incidence wide-angle X-ray scattering (GIWAXS) measurements were carried out at the Stanford Synchrotron Radiation Lightsource Experimental Station 11-3 using a sample-to-detector distance of 315 mm and an incidence angle of 0.15°.
UV-vis absorption spectroscopy
UV-vis-NIR spectra were recorded with a PerkinElmer Lambda 1050 spectrophotometer.
Fourier transform infrared spectroscopy (FTIR)
Infrared absorption measurements were performed with a PerkinElmer 'Frontier' FT-IR spectrometer on thin p(g42T-T):F4TCNQ films coated on CaF2.
Electrical characterization
The electrical resistivity was measured on fresh films with a 4-point probe setup from Jandel Engineering (cylindrical probe head, RM3000) using co-linear tungsten carbide electrodes with an equidistant spacing of 1 mm. The in-line 4-point probe measurement of films gives the sheet resistance Rs = (π/ln 2)·(V/I), where V and I are the voltage and current and π/ln 2 is a geometrical correction factor. The conductivity was calculated according to σ = 1/(d·Rs), where d is the film thickness.
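A direct transcription of these two relations, evaluated for assumed example readings, is shown below.

```python
# Sheet resistance and conductivity from a co-linear 4-point probe reading.
import math

def conductivity(v, i, d_cm):
    rs = math.pi / math.log(2) * v / i   # sheet resistance, ohm/sq
    return 1.0 / (d_cm * rs)             # conductivity, S/cm

# Assumed example: 1 mV at 1 mA across a 5 µm (5e-4 cm) thick film.
print(f"sigma = {conductivity(1e-3, 1e-3, 5e-4):.0f} S/cm")  # ~441 S/cm
```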
Oscillatory shear rheometry
Measurements were carried out with a Rheometric Scientific ARES LS strain-controlled rheometer using a 3-mm aluminum parallel-plate geometry, a strain of 0.2%, which was in the linear regime, and a frequency of 0.16 Hz. The temperature was increased from −80 °C to 180 °C at 5 °C min⁻¹. Sample preparation and measurement were carried out in an inert nitrogen atmosphere.
Mechanical testing
Dynamic mechanical analysis (DMA) and tensile testing were performed using a Q800 dynamic mechanical analyzer from TA Instruments. To support neat p(g42T-T) and polymer doped with 3 mol% or 6 mol% dopant during mounting, samples were fixed in a paper frame that was cut prior to tensile testing; all other samples were mounted without any support. DMA was carried out at a frequency of 1 Hz while ramping the temperature from −80 °C to 60 °C at a rate of 3 °C min⁻¹. A preload force of 0.003-0.009 N and a dynamic strain with a maximum value of 0.03-0.05% were used for samples supported by a glass fiber mesh. A preload force of 0.01 N, a gauge length of 5.1-5.6 mm and a dynamic strain with a maximum value of 0.3% were used for free-standing doped p(g42T-T). DMA of free-standing neat p(g42T-T) was performed at 1 Hz by cooling from 22 °C to 0 °C at a rate of −3 °C min⁻¹ with a preload force of 0.01 N, a gauge length of 4.3 mm and a dynamic strain with a maximum value of 0.02%. Tensile testing was performed in controlled-force mode with a force rate of 0.005 N min⁻¹ using a gauge length of 3.8-7 mm.
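For reference, the sketch below shows how the Young's modulus (initial slope) and toughness (area under the curve) follow from a recorded stress-strain curve; the curve itself is an illustrative placeholder, not measured data.

```python
# Young's modulus and toughness from a stress-strain record.
import numpy as np

strain = np.linspace(0, 0.30, 301)             # dimensionless
stress = 232 * strain - 300 * strain ** 2      # MPa, illustrative curve

elastic = strain <= 0.005                      # small-strain (linear) region
E = np.polyfit(strain[elastic], stress[elastic], 1)[0]  # slope = modulus, MPa

toughness = np.trapz(stress, strain)           # area under the curve
print(f"E = {E:.0f} MPa, toughness = {toughness:.1f} MJ/m^3")  # MPa = MJ/m^3
```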
Molecular dynamics (MD) simulations
The parallel MD simulator LAMMPS was used to perform all-atom MD simulations with the general AMBER force field (GAFF), as implemented in the moltemplate code. [41] The Lennard-Jones and Coulombic interactions were cut off at 1.1 nm, and a k-space particle-particle particle-mesh scheme was used for long-range Coulombic interactions, as implemented in the LAMMPS package. All MD simulations were carried out with a 1.0 fs time step. The initial structures and partial atomic charges of the molecules were obtained from geometry optimization and electrostatic potential (ESP) calculations, respectively, using density functional theory (DFT) with the ωB97XD functional and the 6-31G(d) basis set as implemented in Gaussian (Fig. S4, ESI†). 200 oligomer chains consisting of four g42T-T repeat units, with a charge of 0, +1, +2 or +4, were placed in a rectangular computational box of 20 × 20 × 20 nm³ together with F4TCNQ anions or dianions to achieve charge neutrality (see Table S1, ESI†). The solid-state nanostructure was modelled by the following procedure: (1) initial equilibration at 800 K in an isochoric-isothermal (NVT) ensemble for 2 ns and then in an isothermal-isobaric (NPT) ensemble at 0 atm for 5 ns using the Nosé-Hoover thermostat and barostat, while allowing the computational box size to decrease; (2) equilibration at 800 K in a microcanonical ensemble for 1 ns with temperature control by a Langevin thermostat and then in an NPT ensemble at 0 atm for 1 ns; and (3) a cooling step from 800 to 300 K at a rate of 0.5 K ps⁻¹ in an NPT ensemble at 0 atm, followed by equilibration in an NPT ensemble for 5 ns.
Fig. 2 (a) Shear storage and loss modulus, G′ and G″, and tan δ = G″/G′ of p(g₄2T-T) as a function of temperature; (b) tensile storage and loss modulus, E′ and E″, and tan δ = E″/E′ of neat p(g₄2T-T) (orange/yellow) and p(g₄2T-T) doped with 20 mol% F4TCNQ (black/grey/blue) recorded as a function of temperature; neat p(g₄2T-T) was only analyzed by cooling from 22 °C to 0 °C because it was difficult to keep the material intact over a wider temperature range.
Fig. 4 (a) Snapshots of equilibrated nanostructures obtained from molecular dynamics (MD) simulations of p(g₄2T-T) oligomers (blue) and F4TCNQ anions (green); the tetraethylene glycol side chains of the oligomers and the cyano groups of the anions are omitted; (b) radial distribution function g_t-t(r) after MD equilibration of the distance r between the centers of mass of thiophene rings that are part of different p(g₄2T-T) oligomers; and (c) g_t-b(r) of the distance r between the centers of mass of thiophene rings and the center of mass of the benzene ring of F4TCNQ anions.
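A minimal sketch of how a radial distribution function such as g_t-t(r) can be estimated from center-of-mass coordinates (a naive implementation that ignores periodic boundary conditions and uses all unique pairs; restricting pairs to rings on different oligomers is a straightforward extension):

import numpy as np

def radial_distribution(points, box_volume, r_max, n_bins=100):
    """Naive g(r) for a set of 3D points (e.g., ring centers of mass).
    Periodic images are ignored, so r_max should stay well below half
    the box length."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                 # unique pair distances
    counts, edges = np.histogram(d, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[:-1] + edges[1:])
    shell_vol = 4.0 * np.pi * r**2 * np.diff(edges)
    ideal = shell_vol * n * (n - 1) / (2.0 * box_volume)  # ideal-gas pair count
    return r, counts / ideal

# Uniformly random points in a (20 nm)^3 box should give g(r) close to 1
pts = np.random.uniform(0.0, 20.0, size=(500, 3))
r, g = radial_distribution(pts, box_volume=20.0**3, r_max=5.0)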
Fig. 5 (a) Transmission FTIR absorbance spectra, with the absorbance A normalized by the film thickness d, of p(g₄2T-T) doped with 3 mol% F4TCNQ (blue; O_ox ≈ 5.7%) and 6 mol% F2TCNQ (green; O_ox ≈ 6.4%); (b) stress-strain curves recorded at room temperature by tensile deformation of free-standing samples of p(g₄2T-T) doped with 3 mol% F4TCNQ (blue) and 6 mol% F2TCNQ (green); (c) snapshots from equilibrated MD simulations of p(g₄2T-T) oligomers with a charge of +1 (O_ox ≈ 8.3%) neutralized with F4TCNQ anions (left) and F4TCNQ dianions (right); (d) radial distribution function g_t-t(r) after MD equilibration of the distance r between the centers of mass of thiophene rings of different oligomers for neutral oligomers (grey) and oligomers with charge +1 neutralized with F4TCNQ anions (green) and F4TCNQ dianions (blue).
Fig. 6 (a) Stress-strain curves recorded at room temperature by tensile deformation of free-standing samples of neat p(g₄2T-T) (red) and the polymer doped with F4TCNQ (blue), resulting in an oxidation level per thiophene ring O_ox ranging from 5.7 to 18.2%; inset: photograph of a doped polymer sample clamped in a DMA instrument prior to tensile deformation; (b) Young's modulus E (black) and conductivity σ (red) of p(g₄2T-T) doped with F4TCNQ; (c) σ vs. E of p(g₄2T-T) doped with F4TCNQ (blue) and F2TCNQ (green).
Table 1
Elastic modulus at room temperature before and after doping, E_neat and E_doped, as well as a figure of merit Z = log(E_doped/E_neat), reported for unaligned polythiophenes. Note that the dopant concentration in mol% is calculated per repeat unit in the case of the poly(3-alkylthiophene)s but per thiophene ring in the case of p(g₄2T-T) (see Fig. 1 for the chemical structure).
"Materials Science"
] |
Simulating Multilevel Dynamics of Antimicrobial Resistance in a Membrane Computing Model
The work that we present here represents the culmination of many years of investigation into a suitable methodology to simulate the multihierarchical processes involved in antibiotic resistance. Everything started with our early appreciation of the different independent but embedded biological units that shape the biology, ecology, and evolution of antibiotic-resistant microorganisms. Genes, plasmids carrying these genes, cells hosting plasmids, populations of cells, microbial communities, and populations of hosts constitute a complex system where changes in one component might influence the others. How would it be possible to simulate such complexity of antibiotic resistance as it occurs in the real world? Can the process be predicted, at least at the local level? A few years ago, because of their structural resemblance to biological systems, we realized that membrane computing procedures could provide a suitable frame to approach these questions. Our manuscript describes the first application of this modeling methodology to the field of antibiotic resistance and offers a set of illustrative examples, only a small selection of the possible ones, to show its unprecedented explanatory power.
We denote the antibiotics in the model as AbA for the aminopenicillins, AbC for cefotaxime-ceftazidime, and AbF for the fluoroquinolones (FLQs), using the initials of three of the major groups of antibiotics used in clinical practice (Table 1).
The basic scenario in the hospital and community compartments. (i) Dynamics of bacterial resistance phenotypes in Escherichia coli. Waves of successive replacements of resistance phenotypes in hospital-based E. coli strains during 20,000 time steps (about 2.3 years, as each time step represents approximately 1 h) are illustrated in Fig. 1. The main features of this process, mimicking clonal interference, are as follows: (i) a sharp decrease in the density of the fully susceptible phenotype (pink line); (ii) a rapid increase of the phenotype AbAR (aminopenicillin resistance), resulting from the transfer of the plasmid with AbAR to the susceptible population and consequent selection (red); (iii) an increase, by selection and, marginally, by acquisition of mutational resistance, of the phenotype AbFR (fluoroquinolone resistance) (violet); (iv) an increase of the double resistance AbAR-AbFR by acquisition of an AbFR mutation within organisms of the AbAR-only phenotype and by transfer of the plasmid encoding AbAR from the AbAR-only phenotype to the AbFR-only phenotype (brown); (v) an increase of the phenotype with the double resistances AbAR and AbCR by capture, by the predominant AbAR-only phenotype, of a plasmid containing AbCR (cefotaxime resistance) that originated in Klebsiella pneumoniae (light blue); (vi) the almost simultaneous emergence but later predominance of the multiresistant organisms with phenotype AbAR-AbCR-AbFR by mutational acquisition of AbFR by the doubly resistant phenotype AbAR-AbCR and also by acquisition of the plasmid-mediated AbCR by the AbAR-AbFR phenotype (dark blue); (vii) close in time, the emergence (but at low density) of the AbCR-only phenotype by acquisition of the plasmid encoding AbCR by the fully susceptible phenotype and the AbAR phenotype, with loss of plasmid-mediated AbAR by incompatibility with the incoming plasmid (light green); and (viii) the acquisition of the AbFR mutation by the AbCR-only phenotype, or plasmid reception of the AbCR trait from K. pneumoniae by AbFR organisms, giving rise to the phenotype AbCR-AbFR (olive green). In the community, where antibiotic exposure is less frequent, a similar dynamic sequence occurs but at a much lower rate (Fig. 2). (ii) Dynamics of bacterial species. Antibiotic use and antibiotic resistance influence the long-term dynamics of bacterial species in the hospital environment (Fig. 2C and D). Under the conditions of our basic scenario, E. coli populations (black) tend to prevail. Enterococcus faecium (violet) and K. pneumoniae (yellow-green) populations were maintained during the experiment. In the community, E. coli has a stronger dominance over the other species, and dynamics similar to those in the hospital occur, at lower rates.
Klebsiella pneumoniae (Fig. S3) is intrinsically resistant to AbA, and in our case it harbors a plasmid encoding AbCR (cefotaxime [CTX]) and a mutation encoding AbFR (fluoroquinolone [FLQ]). In the hospital, the AbCR phenotype is readily selected. However, because of the high density of E. coli populations with the plasmid-mediated AbAR, several Klebsiella strains receive this plasmid. These Klebsiella strains receive no benefit from this plasmid because they are intrinsically aminopenicillin resistant, but incompatibility with the plasmid determining AbCR occurs, eliminating AbCR from the recipients and giving rise to the phenotype AbAR-AbFR (purple). That contributes to the decline in AbCR-containing phenotypes (olive green). In any case, the dominance of E. coli prevents significant growth of K. pneumoniae. Enterococcus faecium (Fig. S3) is intrinsically resistant to AbC (AbCR, CTX), but there are two variants, one AbA (aminopenicillin [AMP]) susceptible and the other AbA resistant, the latter of which also carries AbFR. However, the AbAS variant can acquire the AbAR trait from the resistant one by (infrequent) horizontal genetic transfer and can become an AbAR donor. There is a replacement dynamic of AbAS by the AbAR phenotype.
(iii) Influence of baseline resistance composition on the dynamics of bacterial species. The local evolution of antibiotic resistance can depend on the baseline composition of susceptible and resistant bacterial populations (Fig. 3). In a baseline scenario, we consider a density of 8,600 h-cells (1 h-cell = 100 identical cells; see the section "Quantitative structure of the basic model application" below) of E. coli among which 5,000 are susceptible, 2,500 have plasmid-mediated aminopenicillin resistance (PL1-AbAR), 1,000 have fluoroquinolone resistance (AbFR), and 100 combine both resistances. To mimic a "more-susceptible scenario," values were changed to 8,000 susceptible cells, 500 with PL1-AbAR, 50 with AbFR, and 50 with PL1-AbAR and AbFR. Higher proportions of susceptible E. coli cells facilitate the increase in the populations of the more resistant organisms, K. pneumoniae and AbAR E. faecium. Because of the selection of K. pneumoniae (olive green) harboring cefotaxime resistance (PL1-AbCR) and because of the ability of the PL1 plasmid to transfer to E. coli, the proportion of E. coli cells with cefotaxime resistance (mainly light and dark blue) increases in the scenario with a lower resistance baseline for E. coli. This example illustrates the hypothesis that a higher prevalence of resistance in the E. coli component of the gut flora might reduce the frequency of other resistant organisms, which might inspire interventions directed to restore the susceptibility in particular species (10, 11).
(iv) Single-clone E. coli dynamics: influence of baseline resistances. In the previous analysis, subpopulations of E. coli were characterized by their antibiotic resistance phenotype (phenotype populations). Alternatively, we can follow the evolution of four independent E. coli clones, each tagged in the model with particular signals (unrelated to AbR), namely, E. coli clone 0 (Ecc0), EccA, EccF, and EccAF (see Table 1), each starting with specific resistance traits and allowing for the possibility that the frequency of phenotypes within a clone might change through time by the gain or loss of a trait. Fig. 4 shows the densities of these ancestor clones through time. The details of the sequential trait acquisitions for each of these clones are shown in Fig. S2. The fully susceptible E. coli clone (Ecc0) first acquires AbAR (red) and AbCR (green). The AbAR phenotype facilitates the capture by lateral gene transfer of AbCR (CTX), giving rise to the double AbAR-AbCR phenotype (light blue). The incorporation of AbFR (violet; FLQ) in the fully susceptible clone occurs early (later in the AbAR population), such that the rise of the multiresistant phenotype (dark blue) occurs later and again at low numbers. The presence of the AbAR trait in the clone at time zero (EccA) increases the success of the clone and includes the acquisition of AbFR and the multiresistant phenotype. Interestingly, the presence of AbFR (fluoroquinolone resistance) at the origin (EccF) was critical for the enhancement of the numbers of doubly resistant and multiresistant phenotypes. The clones that were more susceptible at the origin remain relatively stable in numbers, suggesting that clonal composition tends to level off along the continued challenges under antibiotic exposure. (v) Dynamics of mobile genetic elements and resistance traits. We consider E. coli, K. pneumoniae, and Pseudomonas aeruginosa to be members of a "genetic exchange community" (12, 13) for the plasmid PL1. As shown in Fig. 5, we can compare the evolutionary advantage of the same resistance phenotypic trait (AbAR) harbored in a plasmid, as in E. coli, to that of the trait harbored in the chromosome, as in K. pneumoniae. The overall success of the PL1 plasmid (blue line) benefits from the fact that this mobile element is selected by two different antibiotics (AbA and AbC; resistance shown in red and green lines, respectively). Interestingly, resistance to AbF (violet) is selected from the early stages of the experiment, and after 4,000 steps it converges with AbCR, a plasmid-mediated trait, meaning that this plasmid is maintained almost exclusively in strains harboring AbFR genes, similarly to empirical findings (14, 15). When the conjugation rate of PL1 was increased, the main effect was a reduction in the selection of K. pneumoniae, as the predominance of the PL1-AbAR plasmid from the more abundant populations of E. coli tended to dislodge PL1-AbCR from K. pneumoniae (results not shown).
Dynamics under conditions of changing scenarios in the hospital and community compartments. (i) Frequency of patient flow between hospital and community. The frequency of exchange of individuals between the hospital and the community (hospital admission and discharge rates) influences the evolution of antibiotic resistance (Fig. 6). This occurs because sensitive bacteria enter the hospital with newly admitted patients from the community (where resistance rates are low), and this "immigration" allows sensitive bacteria to "wash out" resistant bacteria (16). Multiresistant E. coli strains emerge much earlier with decreased flow rates, because bacteria resistant to individual drugs have more time to coexist and thus to exchange resistances by gene flow, and because the length of "frequent exposure" to different antibiotics (and, consequently, selection) increases (17). The effect of a slow flow of patients to the community is a late reduction in multiresistance (AbAR-AbCR-AbFR) and an earlier reduction in double resistances (AbAR-AbFR and AbAR-AbCR). In the community compartment, however, multiresistance increases when the flow from the hospital is more frequent.
(ii) Frequency of patients treated with antibiotics. Higher proportions of patients exposed to antibiotics increase selection of antibiotic resistance (16). We analyzed this effect in our model, considering proportions of 20%, 10%, and 5% of patients exposed to 7 consecutive days of antibiotic therapy at four doses per day (Fig. 7). If a high proportion (20%) of patients are treated, E. coli multiresistance is efficiently selected, as well as K. pneumoniae and E. faecium resistance. If this proportion is reduced to 10% (and, particularly, to 5%), there is a substantial reduction in the amount of resistant E. coli cells and the emergence of multiresistant bacteria is delayed (individual resistance data not shown for these species). However, the evolution of E. coli toward more multiresistance partially counteracts the selective advantage of these species, restricting their growth to some extent, even under conditions of high densities of treated patients.
(iii) Frequency of bacterial transmission rates in the hospital. Transmission of bacteria (i.e., any type of bacteria, including commensals) among individuals in the hospital influences the spread of antibiotic resistance. The effect of transmission rates of 5% and 20% per hour was analyzed (Fig. 8); these rates express the proportion of individuals that acquire any kind of bacteria from another individual per hour. The rates might appear exceedingly high, indicating very frequent transmission between hosts, but we refer here to rates of cross-colonization involving "any type of bacteria." Normal microbiota transmission rates between hosts have never been measured, and such measurements would probably require a complex metagenomic approach (18). Differences in the effects on the evolution of E. coli phenotypes between 10% and 20% colonization rates are unclear; perhaps 10% transmission already produces the full effect and 20% does not add much more. The subtractive representation allows discernment of a global advantage for the multiresistant phenotypes (AbAR-AbCR-AbFR) when the proportion of interhost transmission rises from 5% to 20%. The monoresistant AbAR phenotype tends to be maintained longer under conditions of low contagion rates. Note that multiresistant phenotype "bursts" occur (dark blue spikes in the figure) also with low contagion rates (5% box in Fig. 8) and that "bursts" of less-resistant bacteria (red spikes) also occur with high contagion rates (20% box). Notice that the increase in cross-colonization rates favors the transmission not only of resistant populations but also of the more susceptible ones, to a certain extent compensating for the spread of the resistant-phenotype populations.
(iv) Size of transmitted bacterial load. The absolute number of intestinal bacteria that are transmitted from one host to another is certainly a factor influencing the acquisition of resistant (or susceptible) bacteria by the recipient. However, this number is extremely difficult to determine, as it depends not only on the mechanism of transmission (19, 20) but also on the possibility that the recipient might already harbor bacterial organisms indistinguishable from those that are transmitted (21). On the other hand, efficient transmission able to influence the colonic microbiota depends on the number of bacteria in the donor host and on the ability of different bacteria to colonize not only the lower intestine but also intermediate locations in the body, probably including the mouth or upper intestine (22). To evaluate the potential effect of different bacterial loads acting as inocula, we considered a final immigrant population reaching the colonic compartment equivalent to 0.1%, 0.5%, and 1% of the donor microbiota. As in previous cases, the evolution of multiresistance favored E. coli (Fig. S4). Multiresistant E. coli emerges earlier and reaches higher levels with higher-count inocula, but less-resistant strains are maintained because the higher-count inocula also contain more susceptible bacteria.
(v) Intensity of the effect of antibiotics on bacterial populations. The relationship of the "potency" (intensity of antibacterial activity) of antibiotics to the selection of resistance has been a matter of recent discussion (23-26). To illustrate the point, we changed the bactericidal effect of the antibiotics used in the model. Clinical species were killed at rates of 30% and 15% (reflecting a population decrease) in the first and second hour of exposure, respectively, and these rates were then decreased to 7.5% and 3.75%. Note that these modest killing rates are intended to reflect the diminished effect of antibiotics on slow-growing clinical bacteria located in a complex colonic microbiome. The more susceptible E. coli phenotypes are maintained for longer periods when the killing intensity of antibiotics is lower; in contrast, the multiresistant phenotype emerges earlier and reaches higher numbers when the intensity of antibiotic action increases (Fig. 9). Under conditions of high antibiotic intensity, there is also a (small) increase in the levels of the resistant K. pneumoniae and E. faecium phenotypes. This experiment shows that a high rate of elimination of the more susceptible bacteria favors colonization by the more resistant ones.
(vi) Intensity of the antibiotic effect on colonic microbiota. The proportion of the colonic microbiota killed by antibiotic treatment (and, thus, the size of the open niche for other strains to multiply) constitutes an important factor in the multiplication of potentially pathogenic bacteria and hence affects the acquisition (mutational or plasmid-mediated) of resistance and the transmission to other hosts. In the basic model, the rates of reduction of the population were 25% for AbA, 20% for AbC, and 10% for AbF; in an alternative scenario, these proportions were modified to 10%, 5%, and 2%, respectively. The results of this change were impressive (Fig. 10): the numbers of bacteria were reduced, but the evolution toward antibiotic resistance (in E. coli) also occurred at a lower rate, and even if the proportions of resistance phenotypes were to increase steadily through time, the absolute numbers would not grow, thus limiting host-to-host transmission. (vii) Strength of antibiotic selection on resistance traits. The strength of antibiotic selection is an important parameter in the evolutionary biology of antibiotic resistance (27). Our computational model allows heuristic acquisition of knowledge about the strength of selection of an antibiotic for a particular resistance trait, considering how the resulting trend is (or is not) compatible with the observed reality. An example is the following unanswered question: does plasmid-mediated cefotaxime resistance (AbCR) also provide protection against aminopenicillins (AbAR)? Strains harboring TEM or SHV extended-spectrum beta-lactamases hydrolyzing cefotaxime probably retain sufficient levels of aminopenicillin hydrolysis to be selected by aminopenicillins. However, the cefotaxime-resistant/aminopenicillin-susceptible phenotype is rare in hospital isolates. In our model, this was investigated by providing different strengths of ampicillin (AbA) selection for a cefotaxime-resistant phenotype (AbCR) as follows: no selection (0%), selection in only 10% of the cases (10%), and full selection (100%). The implementation of the model (Fig. S5) showed that if ampicillin were able to select for cefotaxime resistance, the aminopenicillin-susceptible and cefotaxime-resistant phenotype should be prevalent from early stages. This is not what is observed in the natural hospital environment, suggesting that ampicillin is not a major selector for cefotaxime resistance.
DISCUSSION
The rate of antibiotic resistance among bacterial species in a given environment is the result of the interaction of biological elements within a framework determined by many local variables, constituting a complex parameter space (28-30). There is a need to consider, in an integrated way, how changes in these parameters might influence the evolution of resistant organisms. This endeavor requires the application of new computational tools that should consider the nested structure of microbial ecosystems, where mechanisms of resistance (genes) can circulate in mobile genetic elements among bacterial clones and species belonging to genetic exchange communities (12, 13) located in different compartments (such as the hospital or the community). A number of different factors critically influence the evolution of this complex system, such as antibiotic exposure (the frequency of treated patients, drug dosages, the strength of antibiotic effects on commensal bacterial communities, and the replication rate of the microbial organisms), as well as the fitness costs imposed by antibiotic resistance, the rate of exchange of colonized hosts between compartments with different levels of antibiotic exposure (hospital and community), and the rates of cross-transmission of bacterial organisms among these compartments. The challenge that we address in this work is that of combining, for the first time, all these factors (and potentially more) simultaneously in a single computing model to understand the selective and ecological processes leading to the selection and spread of antibiotic resistance. In comparison with the available classic mathematical models that have been applied to the study of the evolution of antibiotic resistance (31), the one we discuss in this work is far more comprehensive in capturing the multilevel parametric complexity of the phenomenon. Note that the results obtained with the model and presented here correspond to only a very limited number of possible "computational experiments," chosen to show the possibilities of the model; virtually unlimited numbers of other experiments, with different combinations of parameters, are feasible à la carte with a user-friendly interface. In addition, our model can illustrate principles, generate hypotheses, and guide and facilitate the interpretation of empirical studies (32, 33). Examples of these heuristic predictions are that resistance (a lower antibiotic effect) in the colonic commensal flora can minimize colonization by resistant pathogens, the possibly minor role of aminopenicillins in the selection of extended-spectrum beta-lactamases (AbCR), and the possible presence of plasmids conferring aminopenicillin resistance in K. pneumoniae (phenotypically "invisible," as this organism has chromosomal resistance to the drug).

FIG 9 Influence of the activity of the antibiotic on E. coli phenotypes (left) and the species composition (right). (Upper panels) Susceptible bacteria were eliminated at rates of 30% after the first hour of exposure and 15% after the second hour. (Lower panels) The elimination rates were lower: 7.5% after the first hour and 3.75% after the second hour. Colors are as described for Fig. 1 and 2.
Our results are presented in terms of the ensemble of biological entities contained in the whole landscape (for instance, in the hospital), aggregated across individual hosts. This "pooling" approach, which originated in ecological studies, has already been used in studies of antibiotic resistance (34). Environments (such as the hospital) are depicted as single "big world" units colonized by "big world populations," including those that are antibiotic resistant but also the susceptible ones, which can limit the spread of resistance, in a sense "spreading health" (35). In this scenario, how might antibiotics modify the available colonization space (36, 37)? Our model includes the elimination of part of the global colonic microbiota with antibiotic use, favoring colonization by resistant organisms, which were previously in the minority.
We can reproduce the successive "waves" of increasingly resistant phenotypes in our computational experiments, mimicking the clonal interference phenomenon (38). We show that the speed and intensity of this process depend on the global resistance landscape and the density and phenotype of the bacterial subpopulations. Our model predicts that previous mutational ciprofloxacin resistance facilitates fast evolution of multiresistance by horizontal acquisition of resistance genes (14,15). We also show that the long-term dissemination of chromosomally encoded genes is far less effective than the spread of traits encoded in transferable plasmids, even though some limitations are detectable because of plasmid incompatibility. A frequently overlooked aspect of antibiotic resistance suggested by the results of our membrane computing experiments is that, over the long term, the evolution of multiresistance probably favors some predominant species such as E. coli, where there is also an increasing benefit for the more resistant clones.
The consequences of changes in the transmission and treatment rates in the hospital and the community were also explored in our model. Several mathematical models have also been used to investigate these changes (16, 37-45). It is clear that reducing discharges and admissions of patients in hospitals has the effect of increasing the local rates of antibiotic resistance, but in our model, increases in the proportions of antibiotic-treated patients in the hospital have a stronger effect, stressing the importance of precision in prescribing antibiotic therapy (44). Increasing rates of hospital cross-colonization also influence the rise of resistance, but this effect seems lower than expected, probably because higher transmission rates also assure the transmission of the more susceptible populations, in a kind of "washing out" of resistance, such as that which occurs when the community-hospital flow increases (16). The model also predicts that increases in the "amount" of bacteria transmitted between hosts favor increases in antibiotic resistance. We considered another frequently overlooked factor, namely, the consequences of increases in the "intensity" (aggressiveness) of antibiotic therapy because of frequent dosage, particularly in terms of its ability to reduce the populations of the colonic microbiota and, therefore, the "colonization resistance" against resistant opportunistic pathogens (46).
Precise data are not always easy to obtain, and the type of mathematical or computational model used can influence the results of predictions (47). However, because of the functional analogy of membrane computing with the biological world, we hypothesize that the trends revealed in our computational model reflect general processes in the evolutionary biology of antibiotic resistance. If the model were fed with objective data extracted from a real landscape (which would be possible with a user-friendly interface), it could provide a reasonable expectation of the potential evolutionary trends in that particular environment and could support the adoption of corrective interventions (48). Validation of this computational model is the next necessary step; as an approach to this goal, we are developing an "experimental epidemiology" model in which the parameters can be altered and measured (49), and we are also planning prospective hospital-based observations.
Finally, we stress that the type of membrane computing model applied in this work can easily be scaled up or adapted to a variety of applications in systems biology (50, 51) and, in particular, can be used to support efforts to understand complex ecological systems with nested hierarchical structures involving microorganisms (52).
MATERIALS AND METHODS
Software implementation and computing model. All computational simulations were performed using an updated version of ARES (Antibiotic Resistance Evolution Simulator) (8), which is the software implementation of a P system for the modeling of antibiotic resistance evolution. This P system works with objects and membranes distributed in different regions organized in a tree-like structure, as in the classic P system model, but now with more-specific rules: the "object rules" can modify an object (evolution rules) or move the object out of, into, or between membranes; the "membrane rules" can move membranes out of, into, or between the regions that contain them, as the object rules do for objects, and can dissolve and duplicate membranes. When a membrane is dissolved, all the membranes and objects inside it disappear. For duplication, we can define which objects are to be duplicated and which ones are to be distributed; the membranes are always distributed. The implementation of our P system uses a stochastic method to apply the rules (the rules being ordered by priorities), and each rule has a "probability" of being applied. Other computational objects can be introduced, either to tag particular membranes or to interact with the embedded membranes, for instance, mimicking antibiotics, according to a set of preestablished rules and specifications. We obtain an evolutionary scenario that includes several types of nested computing membranes emulating entities such as (i) resistance genes, located in a plasmid, in other conjugative elements, or in the chromosome; (ii) plasmids and conjugative elements transferring genes between bacterial cells; (iii) bacterial cells; (iv) microbiotas where different bacterial species and subspecies (clones) can meet; (v) hosts containing the microbiotic ensembles; and (vi) the environment(s) where the hosts are contained. The current version of ARES (2.0) can be freely downloaded at https://sourceforge.net/projects/ares-simulator/. ARES 2.0 is a Java application and runs on any computer, although it is highly recommended to install it on at least a 4-by-6-core server with 128 GB of RAM. The original ARES Web site at http://gydb.org/ares offers sections with information about the rules and parameters currently used by ARES. Anatomy of the model application. The current application of the model was structured with the following composition: (i) compartments containing individual hosts at particular densities, mimicking a hospital (H) and a community environment (C) (flux of individuals between the two compartments occurs at variable rates, mimicking admission to or discharge from the hospital); and (ii) clinically relevant bacterial populations colonizing these hosts, consisting of the species Escherichia coli, Enterococcus faecium, Klebsiella pneumoniae, and Pseudomonas aeruginosa. These populations diversify from their initial phenotype by the acquisition of mutations and/or mobile genetic elements: PL1 plasmids circulating in E. coli, K. pneumoniae, and P. aeruginosa, or conjugative elements (CO1) in E. faecium. A cell can maintain two copies of the PL1 plasmid (containing resistance to AbA [PL1-AbAR] or AbC [PL1-AbCR]) but not more, so that when a third copy of the PL1 plasmid enters the cell, one of the three is stochastically removed. AbCR produces some degree of resistance to AbA, and we consider that this antibiotic also (in 10% of the cases) selects cells containing the plasmid PL1-AbCR. CO1 is an E.
faecium "plasmid-like" mechanism of transfer of chromosomal gene AbAR (CO1-AbAR); a single copy of CO1-AbAR exists in the receiving host. Acquisition of (extrinsic) resistance to AbA (AbAR) is mediated by acquisition of PL1 (or CO1), resistance to AbC (AbCR) by acquisition of PL1 containing the AbCR resistance determinant, and resistance to AbF (AbFR) by mutation. Note that the following results occur in our representations: for example, when Ec0 (susceptible) receives PL1 with AbAR, it becomes EcA; when it receives PL1 with AbCR, it becomes Ec2C; and when Ec0, Ec1, and Ec2 mutate to AbFR, they become EcF, EcAF3, and EcCF, respectively. The acquisition of PL1 with AbAR by EcCF or of PL1 with AbCR by EcAF produces the multiresistant strain EcACF.
Quantitative structure of the basic model application. (i) Hospitalized hosts in the population. The data corresponding to the number of hosts in the hospital and community environments reflect an optimal proportion of 10 hospital beds per 1,000 individuals in the community (https://data.oecd.org/healtheqt/hospital-beds.htm). In our model, the hospital compartment has 100 occupied beds and corresponds to a population of 10,000 individuals in the community.
The rates of admission and discharge from hospital are equivalent at 3 to 10 individuals/population of 10,000/day (https://www.cdc.gov/nchs/nhds/index.htm). In the basic model, 6 individuals from the community are admitted to the hospital and 6 are discharged from the hospital to the community per day (at approximately 4-h intervals). Patients are stochastically admitted or discharged, meaning that about 75% of the patients stay in the hospital between 6 and 9 days.
The bacterial colonization space of the populations of the clinical species considered here (Table 1) and of the other basic colonic microbiota populations is defined as the volume occupied by these bacterial populations. Under natural conditions, the sum of these populations is estimated at 10⁸ cells per ml of colonic content. Clinical species constitute only 1% of the cells in each milliliter and have a basal colonization space of 1% of each milliliter of colonic content (0.01 ml). How these spaces are considered for counting populations in the model is explained in the next section.
The ensemble of the other microbiota populations is considered in our basic study model as a single ensemble surrounded by one membrane. The colonic space occupied by these populations can change because of antibiotic exposure. Throughout a course of treatment (7 days), the antibiotics AbA, AbC, and AbF reduce the intestinal microbiota by 25%, 20%, and 10%, respectively. As an example, if we consider that 10% of the basic colonic populations were eliminated by antibiotic exposure, their now empty space (0.1 ml) would be occupied by antibiotic-resistant clinical populations and by the colonic populations that survived the challenge. In the absence of antibiotic exposure, the colonic populations are restored to their original population size within two months. Clinical populations are comparatively faster in colonizing the empty space.
(ii) Populations' operative packages and counts.
To facilitate the model runs, we consider that a population of 10⁸ cells in nature is equivalent to 10⁶ cells in the model. In other words, one "hecto-cell" (h-cell) in the model represents an "operative package" of 100 cells in the real world. Because of the very high effective population sizes in bacteria, these 100 cells are considered representative of a uniform population of a single cell type. A certain increase in stochasticity might occur because of the use of h-cells; however, run replicates do not differ significantly (see Fig. S1 in the supplemental material). Also, for computational efficiency, we considered that each patient (in the hospital) or individual (in the community compartment) is represented in the model by 1 ml of its colonized colonic space (of about 3,000 ml in total) and refer to the corresponding value as a "host-ml." Consequently, in most of the figures we represent our results as numbers of h-cells in all hosts per milliliter.
(iii) Quantitative distribution of clinical species and clones. In the basal scenario, the distribution of species in these 1,000,000 cells (contained in 1 ml) is as follows: for E. coli, 860,000 cells, including 500,000 susceptible cells, 250,000 cells containing PL1-AbAR, 100,000 cells with the AbFR mutation, and 10,000 cells with both PL1-AbAR and the AbFR mutation; for E. faecium, 99,500 AbA susceptible and 20,000 AbAR; for K. pneumoniae, 20,000 with chromosomal AbAR, PL1-AbCR, and AbFR; for P. aeruginosa, 500 containing PL1-AbCR. At time zero, the distributions are identical in hospitalized and community patients.
(iv) Tagging starting clone populations in E. coli. To be able to follow the evolution of particular lineages inside E. coli, four ancestral clones (Ecc) were distinguished, differing in the original resistance phenotype (with Ecc0 as a fully susceptible clone, EccA harboring PL1 determining AbAR, EccF harboring AbFR, and EccAF with PL1-AbAR and AbFR) ( Table 1). Each of these clones is tagged at time zero with a distinctive "object" in the model which remains fixed to the membrane, multiplies with the membrane, and is never lost. Each of the daughter membranes throughout the progeny can alter its phenotype by mutation or lateral gene acquisition, but the ancestral clone remains detectable.
(v) Multiplication rates. We take as the basal multiplication rate (of 1) the rate corresponding to Ec0, where each bacterial cell gives rise to two daughter cells every hour. In comparison, the rate for E. faecium is 0.85, that for K. pneumoniae is 0.9, and that for P. aeruginosa is 0.15. The acquisition of a mutation, a plasmid, or a mobile element imposes an extra cost of 0.03 per element. Therefore, the rate for Ec0 is 1, that for EcA is 0.97 (cost of PL1-AbAR), that for EcC is 0.97 (cost of PL1-AbCR), that for EcF is 0.97 (cost of the mutation), that for EcAF is 0.94 (PL1-AbAR and AbFR), that for Ef(1) is 0.85, that for Ef(2) is 0.79 (CO1-AbAR and AbFR), that for K. pneumoniae is 0.84 (PL1-AbAR and AbFR), and that for P. aeruginosa with PL1-AbCR is 0.12. The number of cell replications is limited according to the available space (see above).
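The arithmetic of these rates can be checked with a few lines of Python (a sketch reproducing the values quoted above, with the 0.03 cost per acquired element):

BASAL_RATE = {"E. coli": 1.0, "E. faecium": 0.85,
              "K. pneumoniae": 0.9, "P. aeruginosa": 0.15}
COST_PER_ELEMENT = 0.03  # per mutation, plasmid, or mobile element

def multiplication_rate(species, n_acquired_elements):
    return BASAL_RATE[species] - COST_PER_ELEMENT * n_acquired_elements

assert round(multiplication_rate("E. coli", 2), 2) == 0.94        # EcAF
assert round(multiplication_rate("E. faecium", 2), 2) == 0.79     # Ef(2)
assert round(multiplication_rate("K. pneumoniae", 2), 2) == 0.84  # Kp with PL1-AbAR and AbFR
assert round(multiplication_rate("P. aeruginosa", 1), 2) == 0.12  # Pa with PL1-AbCR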
Transfer of bacterial organisms from one host to another is expressed by the proportion of individuals that can stochastically produce an effective transfer of commensal or clinical, susceptible or resistant bacteria to another individual (contagion index [CI]). If contagion is 5% (i.e., if CI = 5), that means that among 100 patients, 5 "donors" transmit bacteria to 5 "recipients" per hour. In the basic scenario, CI = 5 in the hospital and CI = 1 in the community (all data corresponding to results with CI = 0.01 are available on request). In the basic scenario, donors contribute to the colonic microbiota of recipient individuals with 0.1%, 0.5%, or 1% of their own bacteria. These inocula do not necessarily reflect the number of cells transferred but rather the result of endogenous multiplication after transfer, as proposed in other models (53). In any case, cross-transmission is responsible for most new acquisitions of pathogenic bacteria (54).
The frequency of plasmid transfer between bacteria occurs randomly and reciprocally at equivalently high rates among E. coli and K. pneumoniae populations; in the basic model, the rate is 0.0001, representing one effective transfer occurring in 1 of 10,000 potential recipient cells. Plasmid transfer occurs at a much lower rate of 10⁻⁹ in the interactions of E. coli and K. pneumoniae with P. aeruginosa. Conjugative element-mediated transfer of resistance among E. faecium populations occurs at a frequency of 0.0001, but E. faecium bacteria are unable to receive resistance genes from, or donate resistance genes to, any of the other bacteria considered. In the case of the E. coli and K. pneumoniae plasmids, we impose a limit on the number of accepted plasmids such that if a bacterial cell with two plasmids receives a third plasmid, there is a stochastic loss of either one of the resident plasmids or the incoming one; all three cannot coexist in the same cell.
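The incompatibility rule can be illustrated with a short sketch (ours, with hypothetical function names) of the "at most two PL1 copies, stochastic loss of a third" behavior described above:

import random

MAX_PL1 = 2

def receive_pl1(resident_plasmids, incoming):
    """resident_plasmids: list such as ['PL1-AbCR', 'PL1-AbAR'].
    The incoming copy is added, then copies are stochastically removed
    until at most two PL1 copies remain (resident or incoming alike)."""
    plasmids = list(resident_plasmids) + [incoming]
    while len(plasmids) > MAX_PL1:
        plasmids.remove(random.choice(plasmids))
    return plasmids

# A cell with PL1-AbCR and PL1-AbAR receiving another PL1-AbAR copy can
# lose AbCR by incompatibility, as in the K. pneumoniae example above
print(receive_pl1(["PL1-AbCR", "PL1-AbAR"], "PL1-AbAR"))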
Mutational resistance is considered in the present version of the model only for resistance to AbF (fluoroquinolones). Organisms of the model-targeted populations mutate to AbFR at the same rate, i.e., 1 mutant per 10⁸ bacterial cells per cell division.
(vi) Antibiotic exposure. In the basic model, 5%, 10%, or 20% of the individuals in the hospital compartment are exposed to antibiotics each day, each individual being exposed (treated) for 7 days. In the community compartment, 1.3% of individuals are receiving treatment, with each of them also exposed to antibiotics for 7 days. Antibiotics AbA, AbC, and AbF are used in the hospital and the community compartments at proportions of 30%, 40%, and 30% and of 75%, 5%, and 20%, respectively. In the basic scenario, a single patient is treated with only one antibiotic that is administered every 6 h.
(vii) Intensity of the effect of antibiotics on susceptible clinical populations. After each dose is administered, all three (bactericidal) antibiotics induce a decrease of 30% in the susceptible population in the first hour of dose exposure and of 15% in the second hour. These relatively modest bactericidal effects reflect the reduced antibiotic killing rates of clinical populations embedded in the colonic microbiota. The antibiotic stochastically reaches that percentage of bacterial cells, and those that are susceptible are removed (killed). Therapy is maintained in the treated individual for 7 days.
(viii) Intensity of the effect of antibiotics on colonic microbiota. Antibiotics exert an effect that reduces the density of the colonic commensal microbiota, resulting in free space and nutrients that can benefit the clinical populations. In the basic model, the levels of such reductions are 25% for AbA, 20% for AbC, and 10% for AbF.
"Biology"
] |
Diffuse interstellar bands in Gaia DR3 RVS spectra: new measurements based on machine learning
Diffuse interstellar bands (DIBs) are weak and broad interstellar absorption features in astronomical spectra that originate from unknown molecules. To measure DIBs in spectra of late-type stars more accurately and more efficiently, we developed a random forest model to isolate the DIB features from the stellar components. We applied this method to 780 thousand spectra collected by the Gaia Radial Velocity Spectrometer (RVS) that were published in the third data release (DR3). After subtracting the stellar components, we modeled the DIB at 8621 Å (λ8621) with a Gaussian function and the DIB around 8648 Å (λ8648) with a Lorentzian function. After quality control, we selected 7619 reliable measurements of DIB λ8621. The equivalent width (EW) of DIB λ8621 presented a moderate linear correlation with dust reddening, which is consistent with our previous measurements in Gaia DR3 and the new Focused Product Release. The rest-frame wavelength of DIB λ8621 was updated to λ0 = 8623.141 ± 0.030 Å in vacuum, corresponding to 8620.766 Å in air, as determined by 77 DIB measurements toward the Galactic anticenter. The mean uncertainty of the fitted central wavelength of these 77 measurements is 0.256 Å. With the peak-finding method and a coarse analysis, DIB λ8621 was found to correlate better with neutral hydrogen than with molecular hydrogen (represented by ¹²CO J = (1−0) emission). We also obtained 179 reliable measurements of DIB λ8648 in the RVS spectra of individual stars for the first time, further confirming this very broad DIB feature. Its EW and central wavelength presented a linear relation with those of DIB λ8621. A rough estimate of λ0 for DIB λ8648 is 8646.31 Å in vacuum, corresponding to 8643.93 Å in air, assuming that the carriers of λ8621 and λ8648 are comoving. Finally, we confirmed the impact of stellar residuals on the DIB measurements in Gaia DR3, which led to a distortion of the DIB profile and a shift of the center (≲0.5 Å), although the EW is consistent with our new measurements. Based on our measurements and analyses, we propose that this machine-learning approach can be widely applied to measure DIBs in numerous spectra from spectroscopic surveys.
Introduction
Diffuse interstellar bands (DIBs) are a set of absorption features in the spectra of stars, galaxies, and quasars that are observed in the optical and near-infrared bands (about 0.4-2.4 µm; see DIB surveys: Fan et al. 2019; Hamano et al. 2022; Ebenbichler et al. 2022). High-quality astronomical observations, experimental measurements, and theoretical analyses support that the DIBs originate from complex carbon-bearing molecules (e.g., Campbell et al. 2015; Omont et al. 2019; MacIsaac et al. 2022), so that DIBs can serve as chemical and kinematic tracers of the Galactic interstellar medium (ISM), even though the exact species of most DIB carriers are still unknown.
Because DIBs are weak features and might be blended with stellar lines, early studies preferred to observe a handful of hot stars as background sources because their spectra are clean. Because late-type stars dominate the observations in spectroscopic surveys, synthetic spectra derived from stellar atmospheric models and atomic line lists are needed to isolate the DIB signal from the stellar components. However, an incorrect modeling of stellar lines close to the DIB signal could introduce additional uncertainties into the DIB measurements. Moreover, when the stellar residuals are comparable to the DIB features in terms of strength, a pseudo-fitting is hard to distinguish and could lead to a bias in the measurements of the DIB parameters. To overcome this limitation, Kos et al. (2013) developed a data-driven method, called the "best-neighbor method" (BNM), to build artificial stellar templates for the observed spectra in the vicinity of the DIB feature (the DIB window). Specifically, the BNM first separated the whole spectroscopic sample into a target sample (spectra containing DIB signals) and a reference sample (spectra without DIB signals). The reference sample typically comprised sources at high latitudes and with low dust extinction, according to the assumption that low extinction represents a low abundance of ISM species. Then, for a given target spectrum, the BNM determined the best-matched reference spectra based on a pixel-by-pixel comparison of the spectral region outside the DIB window. Finally, a number of best-matched reference spectra (up to 25 in Kos et al. 2013) were averaged to create a stellar template for the DIB window. The ISM spectrum within the DIB window, in which the DIB signal is detected and measured, was defined as the target spectrum divided by the generated stellar template. The BNM has been applied to measure DIBs in the spectra from RAVE (Kos et al. 2013), Gaia RVS (Zhao et al. 2022; Gaia Collaboration 2023a, hereafter GFPR), and GALAH (Vogrinčič et al. 2023). Other types of data-driven methods have also been applied to detect DIBs. Saydjari et al. (2023, hereafter AS23) decomposed and recognized stellar components and DIBs in public Gaia RVS spectra with a data-driven prior consisting of ∼40 000 low-extinction RVS spectra. McKinnon et al. (2023) built second-order polynomial models of the normalized flux as a function of stellar parameters for ∼17 000 red clump stars observed in APOGEE and found 84 possible DIBs (25 identified with a confidence level of 95%, 10 of which were previously known) in the residuals between the observed and modeled APOGEE spectra.
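The core of the BNM can be sketched in a few lines of Python (a simplified illustration with assumed array shapes, not the authors' pipeline):

import numpy as np

def bnm_template(target_flux, reference_fluxes, dib_mask, n_best=25):
    """target_flux: (n_pix,) normalized flux; reference_fluxes:
    (n_ref, n_pix); dib_mask: boolean, True inside the DIB window.
    Returns the mean of the best-matched reference spectra."""
    outside = ~dib_mask
    # pixel-by-pixel squared distance, computed outside the DIB window
    dist = np.nansum((reference_fluxes[:, outside] - target_flux[outside])**2,
                     axis=1)
    best = np.argsort(dist)[:n_best]
    return reference_fluxes[best].mean(axis=0)

# ISM spectrum inside the window = observed flux / stellar template:
# ism = target_flux[dib_mask] / bnm_template(target_flux, refs, dib_mask)[dib_mask]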
The third data release of Gaia (Gaia Collaboration 2023c) contains a large number of measurements of DIB λ8621 in about 500 000 RVS spectra of individual stars (Gaia Collaboration 2023b, hereafter GDR3). DIB λ8621 was fit in the ISM spectra derived with the synthetic spectra from the General Stellar Parameterizer from Spectroscopy (GSP-Spec) module (Recio-Blanco et al. 2023). After removing the cases with poor stellar modeling and poor DIB parameters, we defined a high-quality (HQ) DIB sample containing ∼140 000 sightlines (see Sect. 3 in GDR3 for details). However, AS23 reported biases in the fit central wavelength and the Gaussian width of DIB λ8621 (see their Fig. 1) and attributed these biases to the residuals of stellar lines in the vicinity of DIB λ8621 (e.g., the Fe I lines at 8620.51 Å and 8623.97 Å in vacuum wavelength determined by Contursi et al. 2021). In this work, we improve the BNM into a machine-learning (ML) approach that replaces the pixel-by-pixel comparison of the spectral flux in finding the best-matched reference spectra by ML training. The ML approach can directly predict the stellar components in the DIB window for the target spectra instead of comparing them one by one with the reference spectra. Thus, the ML approach can speed up the process and ignore irrelevant features. Furthermore, Kos et al. (2013) down-weighted the Ca II regions in the comparison between target and reference spectra because the Ca II lines are so strong that they overwhelm other stellar features, but the ML model does not need to adjust the weights of stellar lines. We applied the improved BNM to process 780 000 RVS spectra published in Gaia DR3 and measured DIB λ8621 as well as the broad DIB around 8648 Å (λ8648; Zhao et al. 2022). We performed statistical analyses of the properties of these two DIBs based on a selected reliable sample. We compared our new measurements of DIB λ8621 to those in GDR3 and AS23 to estimate the degree of bias of the DIB parameters in GDR3. We note that the BNM has been applied to RVS spectra in the new Focused Product Release (FPR) of Gaia to measure DIBs λ8621 and λ8648 (see GFPR for detailed results), but the FPR only contains DIB measurements in stacked ISM spectra, whereas this work measures DIBs λ8621 and λ8648 in the RVS spectra of individual stars for the first time.
The paper is organized as follows: The data processing is described in Sect. 2.
Data processing
There are 999 645 RVS spectra (R ∼ 11 500, mean spectra of epoch observations) published in Gaia DR3 (Gaia Collaboration 2023c) that can be accessed through the datalink interface of the Gaia Archive (http://cdn.gea.esac.esa.int/Gaia/gdr3/Spectroscopy/rvs_mean_spectrum/). The published RVS spectra were processed by the Gaia DPAC Coordination Unit 6 (CU6) and were equally resampled between 864 and 870 nm with a spacing of 0.01 nm (2400 wavelength bins; Sartoretti et al. 2018, 2023). The spectra were normalized and shifted to the rest frame as well. Following the process in Recio-Blanco et al. (2023) and GFPR, we rebinned the RVS spectra from 2400 to 800 wavelength pixels, sampled every 0.03 nm, to increase the signal-to-noise ratio (S/N). The total wavelength range of the RVS spectra used in this work is between 8471.2 and 8687.5 Å, chosen to ensure that no reference spectra contained NaN-valued fluxes. The DIB window is defined as 8600-8680 Å (267 wavelength pixels).
After combining with the calibrated distance catalog of Bailer-Jones et al. (2021), from which we used the geometric distances, 996 900 sources were left. We calculated E(B − V) for these stars based on the Planck dust map (Planck Collaboration Int. XLVIII 2016) using the Python package dustmaps (Green 2018). The target sample contains 780 513 spectra with E(B − V) > 0.02 mag and S/N > 20. The reference sample contains 36 622 spectra with E(B − V) ⩽ 0.02 mag, |b| ⩾ 30°, and S/N ⩾ 50. The higher S/N threshold for the reference sample was chosen to achieve a better training set. The density distribution in Galactic coordinates of the target and reference samples is shown in Fig. 1. Baron et al. (2015) reported the detection of DIB signals in dust-free regions at high latitudes. Such DIB signals in the reference spectra might introduce an offset or a bias when the DIB profiles are modeled in target spectra, but if we assume that DIBs like this exist only in a very small part of the reference sample, the ML model would treat them as irrelevant features and minimize their effect. Nevertheless, this problem cannot be quantified because we lack DIB maps built with hot stars, which do not require reference spectra.
Stellar templates built with a random forest model
Various supervised-learning algorithms can be applied to model the stellar lines in the DIB window. In this work, we built a model based on random forest (RF) regression, which is an ensemble bagging method that combines a large number of decision trees (Breiman 2001). The RF model predicts the stellar template within the DIB window (8600-8680 Å) for a given target spectrum using the part outside the DIB window (i.e., 8471.2-8600 Å and 8680-8687.5 Å). The model construction and prediction with the RF algorithm were completed with the Python package scikit-learn (Pedregosa et al. 2011).
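A minimal sketch of this setup with scikit-learn (synthetic placeholder fluxes and illustrative window indices; the real model is trained on the reference spectra described above):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
flux_ref = rng.normal(1.0, 0.01, size=(1000, 800))  # placeholder reference fluxes
dib_mask = np.zeros(800, dtype=bool)
dib_mask[430:697] = True                            # 267-pixel DIB window (illustrative indices)

# features: pixels outside the DIB window; targets: pixels inside it
rf = RandomForestRegressor(n_estimators=100, max_depth=50, n_jobs=-1)
rf.fit(flux_ref[:, ~dib_mask], flux_ref[:, dib_mask])

flux_target = rng.normal(1.0, 0.01, size=(1, 800))  # placeholder target spectrum
template = rf.predict(flux_target[:, ~dib_mask])
ism_spectrum = flux_target[:, dib_mask] / template  # observed / stellar template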
The reference sample was separated into three data sets: the training set containing 21 974 spectra (60%), the validation set containing 7324 spectra (20%), and the testing set containing 7324 spectra (20%). The distributions of the stellar atmospheric parameters (Teff, log g, [M/H]) from GSP-Spec (Recio-Blanco et al. 2023) for these sets, as well as for the target set, are presented in Fig. 2. The reference sample (training, validation, and testing sets) has a coverage of the stellar parameter space similar to that of the target sample, mainly covering the main sequence, the subgiant and red giant branches, and an [M/H] region between −1 and 0.5 dex.
Metal-poor and extremely hot or cool stars are notably missing, but they only form a small fraction of the target sample. The stellar parameters are only used to present the coverage of parameter space and were not used in our RF model. They are not necessary for the BNM either, but were always used to speed up the BNM process (e.g., Kos et al. 2013; Zhao et al. 2022; GFPR).
Two of the most important parameters of the RF model are the number of trees in the forest (n_estimators; NE) and the maximum depth of the trees (max_depth; MD). Therefore, we trained RF models using various NE ({20, 40, 60, 80, 100}) and MD ({2, 5, 10, 30, 50, 100}) values, keeping the other parameters at their default values in scikit-learn, and completed the model selection based on the performance on the validation set. For each pair of NE and MD, we applied the trained RF model to predict the stellar components in the DIB window for each RVS spectrum in the validation set and calculated the residuals between the observed and modeled normalized fluxes at each wavelength pixel. The mean residuals of these RVS spectra as a function of the spectral wavelength for different pairs of NE and MD are shown in Fig. A.1 together with their standard deviations. For small NE and MD, structural residuals can be seen around the Ca II line at 8664.5 Å (Contursi et al. 2021), which indicates a poor modeling of this strong line. With increasing NE and MD, the Ca II line is better modeled and the standard deviation becomes smaller. Furthermore, the degree of dispersion of the residuals becomes similar for NE ⩾ 60 and MD ⩾ 30. Their mean residuals are slightly smaller than zero, however, which might be caused by the imperfect normalization of the observed spectra. Some mean residuals with small NE and MD are very close to zero, but their dispersion is apparently stronger. Some weak structural residuals exist even for the maximum NE and MD, but they are only about 10⁻⁴. On the other hand, Fig. A.2 shows the mean of the absolute residuals (MAR), taken along the wavelength within the DIB window, of each RVS spectrum in the validation set as a function of the spectral S/N. MAR decreases with increasing S/N, presenting a strong dependence. The dependence breaks down for S/N ≳ 300, where the dispersion of MAR represents the robustness of the RF model for different types of spectra. MAR also increases dramatically for stars with extreme parameters. For NE ⩾ 20 and MD ⩾ 30, the distribution of MAR becomes similar, and MAR is smaller than 0.01 for S/N ≳ 100. Because of the similar performance on the validation set for large NE and MD, we selected a final parameter pair of NE = 100 and MD = 50. We also trained RF models with larger NE and MD and found that the performance improvement was not significant. For NE = 200 and MD = 300, the mean and the standard deviation of the residuals are −0.46 and 117.58, which is only slightly better than the values for NE = 100 and MD = 50.
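The grid scan over NE and MD can be sketched as follows (toy arrays stand in for the flux outside and inside the DIB window; see the sketch above for the real feature/target split):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X_train, Y_train = rng.normal(1, 0.01, (200, 50)), rng.normal(1, 0.01, (200, 20))
X_val, Y_val = rng.normal(1, 0.01, (60, 50)), rng.normal(1, 0.01, (60, 20))

for ne in (20, 60, 100):
    for md in (5, 30, 50):
        rf = RandomForestRegressor(n_estimators=ne, max_depth=md).fit(X_train, Y_train)
        res = Y_val - rf.predict(X_val)
        print(ne, md, res.mean(), res.std())  # compare mean residual and scatter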
With the selected NE = 100 and MD = 50, we applied the RF model to the testing set to evaluate its performance on the target sample. Because the S/N of 62% of the target spectra is lower than 50, we randomly selected 10 000 reference spectra with 20 ⩽ S/N < 50 and added them to the testing set (a total of 17 324 reference spectra). Figure 3 presents the residuals as a function of wavelength for each spectrum in the testing set, sorted by the spectral S/N. For S/N > 50, the distribution of residuals is generally uniform along the wavelength, which indicates a good modeling of the stellar lines. Only the residuals near the Ca II line (indicated by a dashed green line in Fig. 3) are more significant than the residuals in its vicinity. The performance of the RF model becomes worse for spectra with lower S/N, showing systematic differences between the observed spectra and the modeled stellar template and structural residuals near the stellar lines.
Fig. 4. Four examples of the RF prediction within the DIB window. The black and red lines are the observed RVS spectra and the predicted stellar templates, respectively. The blue line is the derived ISM spectrum with an offset of 0.3 in the normalized flux. Orange marks the masked region during the fittings. The Gaia source ID of these targets is indicated. Some typical stellar lines within the DIB window determined by Contursi et al. (2021) are marked as well. The DIB fitting to these ISM spectra is shown in Fig. 5.
The systematic difference could be caused by the imperfect normalization of low-S/N RVS spectra. We applied a linear continuum in the DIB model (see Sect. 3.2) that can reduce this effect.
Figure 4 shows the stellar templates predicted by the RF model for four RVS spectra within the DIB window. Strong stellar lines, such as Fe I and Si I, are well modeled, and the feature of λ8621 is clearly visible in the derived ISM spectra.
The residuals near the center of Ca II increase slightly. In the last example (bottom), the RVS spectrum was not perfectly normalized, but the RF model still predicts the stellar components properly. Furthermore, the ISM spectrum is well fit by our DIB model with a linear continuum (see the bottom panel in Fig. 5).
Fitting DIBs in ISM spectra
With the trained RF model, an ISM spectrum can be obtained for each RVS source in the target sample by dividing the observed spectrum by the modeled stellar template in the DIB window.
The S/N of the ISM spectra was calculated between 8602 and 8612 Å as mean(flux)/std(flux). Following our previous works (Zhao et al. 2022; GFPR), we modeled the profiles of the two DIBs in the ISM spectra with a Gaussian function (Eq. (1)) for λ8621, a Lorentzian function (Eq. (2)) for λ8648, and a linear function (Eq. (3)) for the continuum, where D and σ DIB are the depth and width of the DIB profile, λ DIB is the measured central wavelength, a 0 and a 1 describe the linear continuum, and λ is the wavelength. Subscripts 8621 and 8648 are used below to distinguish the profile parameters of the two DIBs. The full parameter set of the DIB model is Θ = {D 8621 , λ 8621 , σ 8621 , D 8648 , λ 8648 , σ 8648 , a 0 , a 1 }. A Markov chain Monte Carlo (MCMC) procedure (Foreman-Mackey et al. 2013) was performed to implement the parameter optimization with flat and independent priors for the DIB parameters. The best estimates of the DIB parameters and their statistical uncertainties were taken as the 50th, 16th, and 84th percentiles of the posterior distribution drawn by the MCMC procedure. We refer to Sect. 3.3 in GFPR for a detailed description of the DIB model, the priors, and the MCMC fitting procedure. We continued to use the masked region between 8660 and 8668 Å during the fitting, although the RF algorithm models the Ca II line much better than the BNM. We note that the uncertainties of the ISM spectra used in the MCMC fitting only include the observational flux errors of the RVS spectra, because the RF model cannot estimate the uncertainty of its predictions, so that the total uncertainties are underestimated. According to Eqs. (1) and (2), the EWs of the two DIBs (EW 8621 and EW 8648 ) were then calculated from the fit profile parameters. Four examples of the DIB fits are shown in Fig. 5, whose ISM spectra are sorted by E(BP − RP) calculated by Andrae et al. (2023). EW 8621 and EW 8648 both increase with E(BP − RP). The profile of DIB λ8621 is prominent in all the ISM spectra, while the profile of DIB λ8648 is much shallower and broader than that of λ8621. Because of the very small D 8648 , it is much more difficult to measure λ8648 than λ8621 in the ISM spectra derived from individual RVS spectra. Additionally, the masked region, in which the residual of the Ca II line is clear, also affects the fit to the red wing of the λ8648 profile.
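The model and its MCMC optimization can be sketched as follows with emcee (the package of Foreman-Mackey et al. 2013). The exact parameterization of the Lorentzian, the continuum pivot, and the prior bounds are illustrative assumptions; see Sect. 3.3 in GFPR for the authors' definitions.

```python
import numpy as np
import emcee

def dib_model(theta, lam):
    D1, l1, s1, D2, l2, s2, a0, a1 = theta
    gauss = -D1 * np.exp(-0.5 * ((lam - l1) / s1) ** 2)    # DIB lambda-8621
    lorentz = -D2 * s2 ** 2 / ((lam - l2) ** 2 + s2 ** 2)  # DIB lambda-8648
    return a0 + a1 * (lam - 8640.0) + gauss + lorentz      # linear continuum

def log_prob(theta, lam, flux, err):
    D1, l1, s1, D2, l2, s2 = theta[:6]
    if not (0 <= D1 < 0.5 and 8618 < l1 < 8628 and 0.3 < s1 < 5 and
            0 <= D2 < 0.5 and 8640 < l2 < 8655 and 2 < s2 < 15):
        return -np.inf                                     # flat, independent priors
    r = (flux - dib_model(theta, lam)) / err
    return -0.5 * np.sum(r ** 2)

lam = np.arange(8600.0, 8680.0, 0.3)                       # RVS pixel grid
mask = (lam < 8660) | (lam > 8668)                         # masked Ca II region
truth = [0.08, 8623.1, 1.2, 0.03, 8648.0, 8.0, 1.0, 0.0]
flux = dib_model(np.array(truth), lam) + np.random.normal(0, 0.005, lam.size)

p0 = np.array(truth) + 1e-3 * np.random.randn(32, 8)
sampler = emcee.EnsembleSampler(32, 8, log_prob, args=(lam[mask], flux[mask], 0.005))
sampler.run_mcmc(p0, 2000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)
p16, p50, p84 = np.percentile(chain, [16, 50, 84], axis=0)  # best fit and errors
```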
We performed an injection test following the principles in AS23. The details and discussions are presented in Appendix B. In summary, the distribution of the Z-scores as a function of the injected DIB parameters and the S/N of the ISM spectra is perfectly uniform, which is highly consistent with the findings in AS23, primarily validating our RF model and the DIB fittings.
Selecting a reliable DIB catalog
The DIB fitting was performed for 780 513 ISM spectra derived from the target sample. To select reliable measurements, we calculated χ²dof = χ²tot/dof, where χ²tot is the total χ² between the fit DIB profile and the ISM spectrum, and dof = 259 is the number of degrees of freedom of the DIB model (267 wavelength pixels and eight DIB parameters). We applied a cut of 0.71 < χ²dof < 1.41; these borders were applied by AS23 based on their injection test. Generally, stricter borders provide more accurate measurements to some extent, but also lose more cases. We tried other borders and found that 0.71 and 1.41 properly excluded many of the noisy cases with pseudo-fittings. We further required cuts on the noise level of the ISM spectra, that is, on the S/N and on the correlated noise level R C (Virtanen et al. 2020). Our catalog contains a tail toward small σ 8621 (≲1 Å), and λ 8621 of these cases seems to be affected by the Fe I line. This is because we set an initial guess of σ 8621 = 1.2 Å in the MCMC fitting for all the cases, so that noisy ISM spectra with weak DIB features would obtain a fit σ 8621 around this initial guess. On the other hand, AS23 contains more cases with large σ 8621 (≳3 Å) than our catalog, and their λ 8621 there is more scattered as well. The median σ 8621 in our catalog is 2.13 Å, which is slightly larger than that in AS23 (1.92 Å). This may be due to the different pixel sizes of the RVS spectra used in this work (0.3 Å pixel −1 ) and in AS23 (0.1 Å pixel −1 ). We further applied cuts to constrain λ 8621 between 8620 and 8626 Å and σ 8621 within 1-4 Å. Hence, the final selected DIB catalog contains 7619 measurements. This DIB catalog can be accessed via the CDS database in Strasbourg, France (Ochsenbein et al. 2000).
The reliable DIB catalog in this work was constructed only by simple cuts on χ², the noise level (S/N and R C ), and the DIB parameters (λ 8621 and σ 8621 ). It presents a Gaussian-like σ DIB − λ DIB distribution without any significant impact of stellar lines and shows a good correlation between EW 8621 and the dust reddening (see Sect. 4.2). The catalog certainly contains pseudo-fittings, such as the outliers seen in the σ DIB − λ DIB diagram, but they should make up only a very small part of the catalog after the quality control and have only little impact on the statistical analysis of the DIB properties. Furthermore, as DIBs are weak features, the investigation of specific fittings would need a visual inspection of their ISM spectra. Figure 7 shows the Galactic distribution of the number of DIB measurements and of their mean EW 8621 : the measurements are concentrated in the Galactic midplane and some prominent molecular regions, with a remarkable extension to high latitudes in the directions of the Galactic center (GC) and anticenter (GAC). Like the dust reddening, the large mean EW 8621 focuses on the Galactic plane with |b| ≲ 5° and decreases on average with increasing latitude.
To validate the DIB catalog, we compared the fit and integrated EW of DIB λ8621 for all the 7619 measurements. Because the profiles of λ8621 and λ8648 might overlap, the fit profile of λ8648 was first subtracted from the ISM spectra normalized by the fit linear continuum, and then the remaining part within λ 8621 ± 3σ 8621 was integrated. Figure 8 shows the comparison between the fit and integrated EW 8621 , as well as the EW difference (∆EW = EW fit − EW int ) as a function of the fit EW 8621 . The fit and integrated EW 8621 are highly consistent with each other, with a mean difference of only 0.001 Å and a standard deviation of 0.016 Å. ∆EW is smaller than the uncertainty of EW 8621 (0.031 Å on average) for over 96% of the measurements, and 90% of the ∆EW values are smaller than 0.023 Å. The difference between the fit and integrated EW 8621 tends to increase for measurements with large EW 8621 . These measurements were made in ISM spectra with generally lower S/N, where the residuals of the stellar lines increase dramatically (the RF model performs worse at low S/N; see Fig. A.2) and consequently lead to an increase in ∆EW. We checked some ISM spectra with large EW 8621 and ∆EW and found structural features in addition to the DIB signal. These features are more likely due to stellar residuals than to possible Doppler splitting caused by multiple ISM clouds along the sightlines, because they can be far away from the center of the DIB profile and usually have a much smaller depth than λ8621. Despite the heavier influence of the stellar residuals and noise, the relative EW uncertainty does not increase for large EW 8621 . For EW 8621 > 0.5 Å (626 measurements), the fractional error of the fit EW 8621 is mainly (99.4%) within 20%, with a mean of 11.9%, and ∆EW is mainly (99.4%) smaller than 10% of the fit EW 8621 . The Doppler broadening caused by unresolved multiple DIB components and the probable intrinsic asymmetry of the DIB profile may contribute to ∆EW as well. However, the S/N of the ISM spectra in this work is not high enough to distinguish these effects from the others.
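The cross-check can be sketched as below; the additive model form and the continuum pivot follow the sketch above and are assumptions rather than the authors' exact implementation.

```python
# Integrated EW of lambda-8621: normalize by the fit continuum, remove the
# fit lambda-8648 profile, and integrate 1 - flux within +/- 3 sigma.
import numpy as np
from scipy.integrate import trapezoid

def integrated_ew_8621(lam, ism_flux, theta):
    D1, l1, s1, D2, l2, s2, a0, a1 = theta
    cont = a0 + a1 * (lam - 8640.0)
    lorentz = -D2 * s2 ** 2 / ((lam - l2) ** 2 + s2 ** 2)
    norm = (ism_flux - lorentz) / cont            # lambda-8648 removed
    win = np.abs(lam - l1) <= 3.0 * s1            # +/- 3 sigma window
    return trapezoid(1.0 - norm[win], lam[win])

def fit_ew_8621(theta):
    # Analytic EW of the Gaussian profile: sqrt(2*pi) * D * sigma
    return np.sqrt(2.0 * np.pi) * theta[0] * theta[2]
```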
DIB λ8621 and dust reddening
Both DIB EW and dust reddening can be used to map the spatial distribution of ISM species and the Galactic large-scale structures, but the EW generally has a much larger relative uncertainty than reddening at present. A tight linear correlation between DIB EW and dust reddening has been discovered for a set of strong DIBs with early-type stars as background sources (e.g., Munari et al. 2008; Friedman et al. 2011; Lan et al. 2015), despite the inevitable scatter and outliers (see the review of Krełowski 2018). On the other hand, the degree of dispersion between DIB EW and dust reddening usually increases by an order of magnitude for a spectroscopic survey data set that is dominated by late-type stars (see, e.g., Kos et al. 2013; Zasowski et al. 2015; GDR3). The relatively lower S/N of the survey spectra (compared to specifically designed DIB observations) and the difficulty of modeling the atmospheric components of late-type stars certainly contribute to the increased dispersion in the DIB-dust correlation. However, the numerous observations in a survey should contain some sightlines in which the DIB carriers and dust grains are not spatially associated with each other, because dust grains are currently not considered candidate DIB carriers (Cox et al. 2007, 2011).
We reviewed the correlation between DIB λ8621 and dust reddening with our new DIB measurements and two sources of reddening. Our DIB catalog contains 2957 cases with E(BP − RP) from Andrae et al. (2023) and 4656 cases with A V from Green et al. (2019). The best-estimated E(BP − RP) and its lower and upper confidence levels for the target stars were accessed via the Gaia archive. A V and its uncertainty were obtained with the bayestar module in dustmaps using the percentile mode. A V equals 2.742 times the reddening unit given by bayestar (Green et al. 2019). The scatter plot between EW 8621 and dust reddening is shown in Fig. 9 for E(BP − RP) (upper panel) and A V (lower panel), with the median values and standard deviations taken in each EW 8621 bin with a step of 0.05 Å (red dots). The median dots present a good linear relation between EW 8621 and dust reddening for both E(BP − RP) and A V for EW 8621 ≲ 0.5 Å, with a deviation at larger EW 8621 . AS23 reported a larger A V than expected where the median dots deviated from their linear fit to EW 8621 and A V , but conversely, the median A V becomes smaller than expected in these regions in our work. This is only because the median dots were taken in EW 8621 bins in this work, but in A V bins in AS23. We checked a part of the outliers, for example those with high reddening but small EW 8621 , and found that many of the DIB measurements have proper DIB parameters, and their ISM spectra clearly contain DIB signals upon visual inspection. This verifies that DIB and dust are not required to appear together. They could statistically present a linear relation only due to the accumulation of different ISM species along the sightline. Therefore, DIB EW may not be a good proxy for dust reddening in specific directions, and their ratio also varies with the investigated samples.
The linear fits to EW 8621 and dust reddening performed in previous works are also plotted as dashed lines in Fig. 9. For E(BP − RP), the tendency of the median dots is consistent with the fit line of GDR3 (black) and of GFPR (magenta). Furthermore, the standard deviation of the individual measurements in each EW 8621 bin is much larger than the difference between GDR3 and GFPR. For A V , the median dots are closer to the line of GDR3 (red) than to that of AS23 (blue). AS23 obtained a fit EW 8621 /A V = 0.106 ± 0.017 Å mag −1 , corresponding to 3.448 mag Å −1 for E(B − V)/EW 8621 , which is 57% larger than the value fit in GDR3 (2.198 mag Å −1 ). This difference is mainly caused by the systematic difference in EW 8621 (see Fig. 14), rather than by the control of the bias and uncertainties argued by AS23. The E(B − V)/EW 8621 ratios derived in different works (e.g., Wallerstein et al. 2007; Munari et al. 2008; Kos et al. 2013; Puspitarini et al. 2015) differ by 20% on average (see Table 3 in GDR3). The result of AS23 is similar to that in Zhao et al. (2021), who used the Gaia-ESO (Gilmore et al. 2012) data set and E(B − V) from Schlegel et al. (1998). We emphasize that the mean correlations between EW 8621 and dust reddening derived in our series using the Gaia RVS data (GDR3; Zhao et al. 2022; GFPR) are consistent with each other within 10% (see also the discussion in Sect. 5.2 of GFPR).
Kinematics of DIB λ8621
To study the kinematics of the carrier of DIB λ8621, it is fundamental to determine its rest-frame wavelength (λ 0 ), which is also required to identify the nature of λ8621 through comparison to laboratory measurements. In this work, we followed the statistical method that assumes that the DIB radial velocity is null toward the GAC for a circular rotational orbit, so that the intercept at ℓ = 180° indicates λ 0 (see Zasowski et al. 2015; GDR3; AS23).
As the published RVS spectra have been shifted to the stellar frame, where the stellar lines are at their rest positions but the DIB features are additionally shifted, we converted λ 8621 into the heliocentric frame (λ helio ) using the radial velocity of the stars (V star ) determined in Gaia DR3 (Katz et al. 2023). We selected 77 DIB measurements in our DIB catalog with |b| < 5°, 170° < ℓ < 190°, d ⩽ 3 kpc, err(λ 8621 ) < 0.5 Å, and err(V star ) < 5 km s −1 . The information on their background stars, including the Gaia-DR3 source ID, Galactic coordinates, apparent G magnitudes, and stellar atmospheric parameters from GSP-Spec, is listed in Table C.1, together with their λ helio . We note that some cases seem to be early-type stars without GSP-Spec estimates of their stellar parameters. Nevertheless, by visual inspection, our RF model, which does not rely on stellar parameters, also works well for these spectra, even though the RVS sample is dominated by late-type stars. Figure 10 presents the slight linear trend of λ helio around the GAC, reflecting the projection of the galactocentric rotation of the DIB carrier (Zasowski et al. 2015). Some small deviations of λ helio from the linear trend can be seen. In addition to the fitting uncertainty of λ 8621 , the turbulent motion in the DIB clouds and possible physical changes of the DIB shape and position (see, e.g., Galazutdinov et al. 2008; Krełowski et al. 2021) probably also contribute. The statistical method can reduce these effects if no strong systematic deviations exist. A least-squares linear fit to λ helio and the angular departure from the GAC (∆ℓ) yielded an intercept of 8623.446 ± 0.030 Å. The uncertainty was estimated by a Monte Carlo simulation with 2000 realizations based on the fit uncertainties of λ 8621 . We note that the mean error of λ 8621 from the MCMC fitting is 0.256 Å for the 77 selected DIB measurements, which is larger than the statistical uncertainty by an order of magnitude. A factor of c/(c + U ⊙ ) was used to correct for the effect of solar motion, where c is the speed of light and U ⊙ = 10.6 km s −1 (Reid et al. 2019) is the radial solar motion toward the GC. Finally, we obtained λ 0 = 8623.141 ± 0.030 Å for DIB λ8621, which is perfectly consistent with the result of AS23 (8623.14 ± 0.087 Å), although we did not consider the distance calibration proposed in AS23. This value is nevertheless lower than that of GDR3 (8623.23 ± 0.019 Å) by 3.0σ in terms of our uncertainty for λ 0 (0.030 Å). Our derived λ 0 corresponds to 8620.766 Å in air wavelength, which is consistent with most literature results within 2σ, such as 8620.7 Å of Sanner et al. (1978), 8620.75 Å of Herbig & Leka (1991), 8620.79 Å of Galazutdinov et al. (2000), 8620.7 Å of Munari et al. (2008, after correction for the solar motion), and 8620.83 Å of Zhao et al. (2021).
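A sketch of this determination is given below; the heliocentric conversion and the fit are simplified, and the function arguments are placeholders.

```python
# Rest-frame wavelength from the intercept at the GAC, with a Monte Carlo
# estimate of the statistical uncertainty and the solar-motion correction.
import numpy as np

C_KMS = 299792.458
U_SUN = 10.6                                      # radial solar motion, km/s

def rest_wavelength(lam_fit, lam_err, v_star, ell, n_mc=2000):
    lam_helio = lam_fit * (1.0 + v_star / C_KMS)  # stellar frame -> heliocentric
    dl = ell - 180.0                              # angular departure from the GAC
    intercepts = np.empty(n_mc)
    for k in range(n_mc):                         # Monte Carlo on the fit errors
        lam_mc = lam_helio + np.random.randn(lam_fit.size) * lam_err
        _, intercepts[k] = np.polyfit(dl, lam_mc, 1)
    lam0 = intercepts.mean() * C_KMS / (C_KMS + U_SUN)
    return lam0, intercepts.std()
```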
We applied the same selection criteria to the 9763 cases in the HQ DIB catalog of GDR3 that were measured in the RVS spectra published in Gaia DR3, and obtained 67 DIB measurements. The derived λ 0 is 8623.368 ± 0.037 Å, which is larger than the result of GDR3 by 0.14 Å (3.8σ). This result suggests that, compared to the full Gaia RVS data set, the use of the public sample in DR3 leads to a redshift of λ 0 . Therefore, the smaller λ 0 determined in this work and in AS23 is not caused by the selection bias of the sample, but by the systematic difference in λ 8621 . As pointed out by AS23, λ 8621 in GDR3 would be affected by the improperly modeled stellar lines. On the other hand, it is not clear whether the λ 0 derived in this work and in AS23 with the public RVS sample is also redder than the true value. We expect that this question can be answered by a following analysis of GFPR or of the DIB measurements in Gaia DR4.
With the derived λ 0 , we calculated the radial velocity of the carrier of DIB λ8621 (V DIB ) in the local standard of rest (LSR; the conversion between the heliocentric frame and the LSR is made with (U ⊙ , V ⊙ , W ⊙ ) = (10.6, 10.7, 7.6) km s −1 ). We selected 3592 DIB measurements at low latitudes (|b| < 5°) and with accurate λ 8621 (err(λ 8621 ) < 0.5 Å) and V star (err(V star ) < 5 km s −1 ) to present the variation in V DIB with the Galactic longitude. The rotation of the DIB λ8621 carrier is clearly seen in the upper panel of Fig. 11, overlaid with theoretical rotation curves for different distances from the Sun. Specifically, for a distance d from the Sun, the galactocentric distance is calculated as R GC = (R 0 ² + d² − 2 R 0 d cos ℓ)^{1/2} (assuming b = 0), where R 0 = 8.15 kpc is the galactocentric distance of the Sun. The circular velocity Θ is then predicted by model A5 in Reid et al. (2019) with R GC and ℓ. Finally, the radial velocity for a given d and ℓ is V = (Θ R 0 /R GC − Θ 0 ) sin ℓ, where Θ 0 = 236 km s −1 is the circular velocity of the Sun.
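The theoretical curves can be sketched as below; a flat rotation curve is substituted for model A5 of Reid et al. (2019) as a simplifying assumption.

```python
import numpy as np

R0, THETA0 = 8.15, 236.0            # kpc and km/s (Reid et al. 2019)

def v_lsr(d, ell_deg, theta=lambda r: THETA0):
    """Projected radial velocity (b = 0) at distance d toward longitude ell;
    theta stands in for model A5 of Reid et al. (2019)."""
    ell = np.radians(ell_deg)
    r_gc = np.sqrt(R0 ** 2 + d ** 2 - 2.0 * R0 * d * np.cos(ell))
    return (theta(r_gc) * R0 / r_gc - THETA0) * np.sin(ell)

print(v_lsr(2.0, 45.0))             # one point of the d = 2 kpc curve in Fig. 11
```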
When we consider the median V DIB in each ∆ℓ = 10° bin (red dots in Fig. 11), the carrier of λ8621 in the selected sample is mainly located within a kinematic distance of 2 kpc from the Sun, although the velocities from individual DIB measurements are much more scattered. This is a reasonable interpretation based on inspecting the mean distances to the background stars of these DIB signals, which are all larger than 2 kpc, with a minimum of 2.3 kpc. Moreover, the DIB carriers toward the GAC have a larger distance on average than those toward the GC. In the lower panel of Fig. 11, the median V DIB is compared to the longitude-velocity map of 12 CO J = (1−0) emission from Dame et al. (2001). We made use of the momentum-masked cube restricted to a latitude range of ±5°. The median V DIB follows the 12 CO velocity curve in the local region, especially from ℓ ≈ −90° to ℓ ≈ −180°. The average V DIB deviation in each longitude bin is 13.8 km s −1 , which prevents the exploration of a finer relation between DIB λ8621 and 12 CO in velocity structures. Nevertheless, more DIBs with large V DIB can generally be found in the regions with high-velocity 12 CO emission. For instance, the 12 CO velocities between 90° and 180° concentrate in two main branches that can be interpreted as the Local and Perseus spiral arms (e.g., Reid et al. 2019). Although AS23 suggested that some DIB measurements coincide with the 12 CO emission in the Perseus arm, the density distribution of V DIB there did not bifurcate; such cases consequently represent only a very small percentage of the total. For ℓ ≈ −60° to 0°, V DIB coincides well with some discrete 12 CO emission at −20 to −10 km s −1 , suggesting that these DIB signals originate in the Local arm.
Detection of DIB λ8648 in individual RVS spectra
The DIB signal around 8648 Å was first reported and measured by Sanner et al. (1978), with positive support from Herbig & Leka (1991), Jenniskens & Desert (1994), Wallerstein et al. (2007), and Munari et al. (2008), but it was missed in the DIB surveys of Galazutdinov et al. (2000) and Fan et al. (2019). These inconsistent results could be due to the difficulty of measuring a weak and broad DIB feature. The high-resolution spectra of early-type stars used in these previous studies would introduce uncertainties in the continuum placement when measuring broad DIBs like this one (Sonnentrucker et al. 2018). Furthermore, stellar lines would cause contamination even for early-type stars, such as the very strong Paschen 13 line (see Fig. 1 in Munari et al. 2008) and the He I line at 8648.3 Å reported by Krełowski et al. (2019) in a B-type star (HD 169454). Under these effects, any conclusion about this DIB signal could be distorted in case studies with early-type stars. Compared to these early studies, the large number of Gaia RVS spectra allows us to investigate this signal systematically with a much larger spatial coverage. Based on the BNM, we successfully modeled the stellar components of late-type stars and detected the DIB signal near 8648 Å in stacked RVS spectra in Zhao et al. (2022) and GFPR. We refer to this DIB as λ8648 following the suggestion in Jenniskens & Desert (1994), but we obtained a smaller λ 0 of 8645.3 ± 1.4 Å in Zhao et al. (2022). The profile of λ8648 was found to be very shallow and broad. In this work, we further verified that DIB λ8648 can be detected in individual RVS spectra, although their S/N is much lower than that after stacking. Figure 5 already shows the clear profile of λ8648 in four ISM spectra. In this section, we selected 179 measurements from the DIB catalog for a statistical study of λ8648 and its correlation with λ8621, with strict criteria: D 8648 > 3R C , 8640 < λ 8648 < 8655 Å, 8 < σ 8648 < 12 Å, and S/N > 50.
The high consistency between the fit and integrated EW 8648 (Fig. 12a) demonstrates that the DIB profile of these cases was properly fit. The general decrease in S/N for large EW 8648 causes the slight deviation. It should be noted that the fit and integrated EW 8648 shown in Fig. 12a were simply calculated outside the masked region between 8660 and 8668 Å, where Ca II residuals would exist (see, e.g., Fig. 5). This means that they are smaller than the EW 8648 shown in Fig. 12b, which was directly calculated from the fit DIB parameters.
EW 8621 and EW 8648 present a linear correlation with a Pearson coefficient (r p ) of 0.82 (Fig. 12b). However, the EW 8621 /EW 8648 ratio in this work is systematically lower than that in GFPR, especially for small EW 8648 . The cause could be a detection bias due to the limited S/N of individual RVS spectra: because D 8648 is only about one-third of D 8621 , a weak λ8648 is harder to detect at a given S/N than a weak λ8621, resulting in a lack of measurements with large EW 8621 /EW 8648 . This effect should be less pronounced for GFPR because the S/N of the stacked ISM spectra was far higher. On the other hand, after visually checking the extreme measurements with EW 8648 ≳ 0.8 Å but EW 8621 ≲ 0.2 Å, we found that the jagged noise within the very broad profile of λ8648 can lead to an overestimation of EW 8648 . There is another possibility: if the carriers of λ8621 and λ8648 have different spatial distributions and the λ8648 carrier is more compact, the stacking of ISM spectra in a 3D volume will yield a smaller EW 8648 (lower mean abundance) relative to EW 8621 . Nevertheless, without a map of their distributions, we cannot analyze the extent of this effect. Additionally, EW 8621 /EW 8648 would also vary from one sightline to the next and present different relations for different samples.
The measured central wavelengths of the two DIBs also present a moderate linear relation, with r p = 0.68 (Fig. 12c), but their FWHMs are noise dominated, especially for λ8648. With a linear fit to λ 8621 and λ 8648 , we made a rough estimate of λ 0 for λ8648 of 8646.31 Å in vacuum, assuming that the carriers of λ8621 and λ8648 are comoving. This value corresponds to 8643.93 Å in air, which is much smaller than previous suggestions, such as 8650 Å by Sanner et al. (1978), 8648.28 Å by Jenniskens & Desert (1994), and 8649 Å by Herbig & Leka (1991). The difference between this result and Zhao et al. (2022) is 1.01 Å, slightly larger than the mean uncertainty of λ 8648 for the cases we used (0.69 Å). Figure 13 shows the correlation between EW 8648 and A V from Green et al. (2019) for 93 cases (the others are beyond the sky coverage of Green et al. 2019). A moderate linear correlation can be found, with r p = 0.70. Compared to λ8621, DIB λ8648 presents a worse correlation with dust reddening, as already noted in Zhao et al. (2022) and GFPR. Similar to EW 8621 /EW 8648 , the A V /EW 8648 ratio in this work is systematically lower than that in GFPR.
Reassessing the DIB measurements in Gaia DR3
Considering the small distances to the background stars (median 1.31 kpc) and the moderate S/N of the RVS spectra (median 115.3) for the HQ sample of GDR3, its measured λ 8621 and σ 8621 should present a quasi-Gaussian distribution centered on λ 0 and the mean Gaussian width, with a dispersion due to the uncertainties and the Galactic rotation (like the distributions seen in Fig. 6 for this work and AS23). However, a strong dependence between λ 8621 and σ 8621 is clearly seen for the HQ sample of GDR3 (see Fig. 1 in AS23), which can be attributed to the improperly modeled stellar lines. On the one hand, for small σ 8621 (≲1 Å) and large λ 8621 (around 8624-8626 Å), the fittings might be purely pseudo-fittings, because the residuals of the Fe I lines there would be stronger than the DIB features. On the other hand, the increase in σ 8621 as λ 8621 shifts shortward (8622-8624 Å) implies a broadening of the DIB profile caused by the noise and stellar residuals, as more stellar lines lie at shorter wavelengths in the vicinity of the DIB signal.
Compared to GDR3, the ISM spectra derived by the data-driven methods in this work and in AS23 are less strongly influenced by the stellar residuals. Therefore, to estimate the magnitude of the biases caused by the stellar residuals, we compared the DIB parameters from GDR3 to those from this work and from AS23 fit in the same RVS spectra: specifically, 1518 cases between GDR3 and this work and 3167 cases between GDR3 and AS23, as well as 2000 cases between this work and AS23 as a control group. We only considered the highest level of the DIB quality flag in GDR3 (i.e., QF = 0; see Sect. 2 in GDR3 and Sect. 8.9 in Recio-Blanco et al. 2023 for details). The differences in the DIB parameters as a function of the fit values are shown in Fig. 14, and their statistics, including the median difference (MED), the root-mean-square difference (RMSD), and the absolute difference not exceeded by 90% of the sources (AD90), are presented in Table 1. The D 8621 for AS23 (not given in their catalog) was calculated from their σ 8621 and EW 8621 assuming a Gaussian profile.
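The three statistics can be computed as below; the function is a direct transcription of their definitions.

```python
import numpy as np

def med_rmsd_ad90(a, b):
    """MED, RMSD, and AD90 of the parameter differences a - b."""
    diff = a - b
    return (np.median(diff),
            np.sqrt(np.mean(diff ** 2)),
            np.percentile(np.abs(diff), 90))
```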
The impact of the stellar residuals causes a systematic shift of λ 8621 in GDR3, which is clearly seen as the systematic variation of ∆λ 8621 with λ 8621 in Fig. 14. This phenomenon coincides with the λ 8621 − σ 8621 dependence discussed above. The MED of ∆λ 8621 is much larger for this work (0.073 Å) than for AS23 (0.018 Å), but the RMSD is similar (0.35 Å) and close to the pixel size, corresponding to 12 km s −1 . This value is also comparable to the mean uncertainty of λ 8621 (0.376 Å) in GDR3 for the joint samples. In terms of AD90, the maximum shift for most λ 8621 is about 0.56 Å (∼19 km s −1 ), 1.5 times larger than the mean uncertainty. As a comparison, λ 8621 in this work and in AS23 are highly consistent with each other, with a median difference of only 0.008 Å and with a halved RMSD and AD90. Nevertheless, ∆λ 8621 between AS23 and this work also presents a weak dependence on λ 8621 , which could be due to a weak stellar impact, even though most ∆λ 8621 are smaller than the RVS wavelength pixel size used in this work.
The overestimated σ 8621 in GDR3 has a MED of 0.164 Å compared to this work and tends to become larger with increasing σ 8621 . The RMSD (0.469 Å) is slightly larger than the mean uncertainty of σ 8621 in GDR3 (0.405 Å), and the AD90 reaches over 0.7 Å. As a comparison, σ 8621 measured in this work is larger than that in AS23, with a nearly constant difference (a MED of −0.293 Å). In addition to the overestimated σ 8621 , D 8621 in GDR3 is smaller than that in this work, and ∆D 8621 presents an increasing trend with D 8621 as well. Additionally, the EW 8621 of GDR3 and this work are highly consistent with each other, with a MED of −0.002 Å and an RMSD of 0.030 Å, comparable to the mean EW uncertainty (0.024 Å) in GDR3. Overall, we propose that the impact of the stellar residuals led to a distortion of the DIB profile in GDR3, which became slightly shallower and broader. The center of the profile was also shifted, by one or two pixels at most for the joint sample, but the area of the profile (EW 8621 ) remained unchanged.
The EW 8621 in this work is systematically larger than that in AS23, and the difference increases with the fit values, reaching around 0.1 Å for EW 8621 ∼ 0.3 Å. The mean ∆EW 8621 is 27.1% relative to our measurements, much larger than the mean uncertainty of EW 8621 (12.9%). Because AS23 and this work have consistent λ 8621 and a nearly constant ∆σ 8621 , the rise in ∆EW 8621 might be caused by the different ML algorithms, which model the DIB depth in different ways. Moreover, AS23 modeled the profile of λ8621 with a Gaussian function only, while we added a Lorentzian function for λ8648 and a linear continuum accounting for RVS spectra with poor normalization. Nevertheless, we note that GDR3 made use of synthetic spectra from stellar models and a simple Gaussian fitting, and yet its EW 8621 is highly consistent with that in this work; thus the influence of the DIB model is probably not significant. The last factor is the fitting technique: in this work, the ISM spectrum was first derived and the DIB profile was then modeled, while in AS23 the DIB profile was implemented as a pixel-by-pixel covariance matrix, together with the stellar components and the noise, and was optimized on a set of grids. Despite the systematic difference in EW 8621 , the span from the 16th to the 84th percentile of ∆EW 8621 (a measure of the magnitude of the dispersion after deducting the tendency) is similar between GDR3 and this work and between AS23 and this work for EW 8621 ≲ 0.3 Å.
DIB λ8621 correlates better with neutral than with molecular hydrogen
Motivated by the good consistency and the multimodality found between DIB λ8621 and 12 CO in velocity structures, AS23 directly compared V DIB and V CO with a peak-finding method. Specifically, signals of λ8621 and 12 CO were matched simply by the position of the background stars on the spatial grid of the 12 CO map (a resolution of 0.125° for Dame et al. 2001).
For any detected 12 CO emission within 1σ of V DIB , V DIB was then compared to the intensity-weighted V CO calculated within nine velocity channels around V DIB (see Sect. 3.4.2 and Appendix E in AS23 for details). With a linear fit, AS23 found a close agreement between V DIB and V CO . However, the matched 12 CO emission is integrated within a restricted velocity range and contains multiple components that cannot be resolved in the RVS spectra. Given the limited accuracy of V DIB and the strong bias of the peak-finding method, it is hard to conclude that the close association between 12 CO and the carrier of DIB λ8621 implies a clumpiness of the DIB carrier. The associated velocities of DIB λ8621 and 12 CO, as well as H I, are more likely a result of the general Galactic rotation of these gaseous ISM species at similar distances.
Based on the velocity-matched DIB − 12 CO and DIB − H I pairs, we made a coarse investigation of the correlation between the DIB strength and the hydrogen column density for both neutral hydrogen (N H I ) and molecular hydrogen (N H 2 ). We calculated the column densities as N H I = 1.823 × 10 18 × I H I (HI4PI Collaboration 2016) and N H 2 = 2 × 10 20 × W CO (Bolatto et al. 2013), where I H I and W CO are the velocity-integrated intensities calculated within nine velocity channels around the matched V DIB . This analysis is based on the assumption that ISM species with similar radial velocities are mainly located at similar distances, so that the DIB features can be compared to the corresponding H I and 12 CO emission, with a narrow-range integration to deduct the foreground and background contamination. This is certainly an idealized assumption. As shown in Fig. 16 on a logarithmic scale, a moderate linear correlation is found between EW 8621 and N H I (r p = 0.74), while EW 8621 is not sensitive to N H 2 (r p = 0.21). Therefore, the carrier of λ8621 correlates much better with neutral hydrogen than with molecular hydrogen. Although EW 8621 is proportional to the column density of the carrier only between the background star and us, while the H I and 12 CO observations may trace the hydrogen abundance over a much wider distance range, the narrow integration range around V DIB seems to alleviate this influence.
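The conversions and the log-log correlation can be sketched as below; i_hi and w_co stand for the velocity-integrated intensities (in K km s −1 ) over the nine channels around the matched V DIB.

```python
import numpy as np
from scipy.stats import pearsonr

def columns(i_hi, w_co):
    n_hi = 1.823e18 * i_hi          # cm^-2 (HI4PI Collaboration 2016)
    n_h2 = 2.0e20 * w_co            # cm^-2 (Bolatto et al. 2013)
    return n_hi, n_h2

def log_correlation(ew, n_x):
    """Pearson coefficient between log10(EW_8621) and log10(N_X)."""
    good = (ew > 0) & (n_x > 0)
    return pearsonr(np.log10(ew[good]), np.log10(n_x[good]))[0]
```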
A set of strong optical DIBs was reported to correlate tightly with N H I but only loosely with N H 2 when N H 2 > 10 20 cm −2 (e.g., Herbig 1993; Friedman et al. 2011; Lan et al. 2015). The relation between EW 8621 and N H I revealed by our coarse analysis corresponds to this inference. In particular, Friedman et al. (2011) derived a tight correlation between DIB λ5780 and N H I , and the range of N H I over which they found the correlation (log N H I ∼ 20-21.5) is similar to ours (see the dashed black line in the upper panel of Fig. 16). According to Fan et al. (2019), the E(B − V)-normalized EW of λ5780 is twice as high as that of λ8621. Hence, the larger EW 8621 than EW 5780 at a given N H I seen in Fig. 16 would be caused by an underestimation of N H I in our analysis due to the narrow integration range. Nevertheless, we did not find a loose correlation between EW 8621 and N H 2 even for N H 2 > 10 20 cm −2 , although the fit line of the EW 5780 − N H 2 relation in Friedman et al. (2011) crosses the highest-density region of our sample for log[EW 8621 ] and log[N H 2 ] (see the dashed black line in the lower panel of Fig. 16). The possible variation in the X CO factor and the saturation problem of 12 CO would further hamper the investigation of the correlation between EW 8621 and N H 2 .
Completeness of the DIB catalog
With the injection test, AS23 estimated the completeness of their DIB catalog by selecting good measurements with ∆D < 20%, ∆σ DIB < 20%, and ∆λ DIB < 2.5 Å (differences between the fit and injected values; see Appendix F in AS23). For our selected DIB catalog, the mean uncertainty of D 8621 and σ 8621 is about 10%, and ∆λ 8621 between this work and GDR3 is mainly within 0.5 Å (see Table 1 and Sect. 4.5). Hence, we estimated the completeness of our DIB catalog with more rigorous criteria, that is, ∆D < 10%, ∆σ DIB < 10%, and ∆λ DIB < 0.5 Å, based on the results of the injection test (see Appendix B). The distribution of the estimated completeness for DIB λ8621 as a function of the injected EW 8621 and σ 8621 (upper panel in Fig. 17) is similar to that in AS23. The completeness generally decreases with EW 8621 , but its variation with σ 8621 is not as clear as in AS23 because the sample we used in the injection test is smaller. In our DIB catalog, the median EW 8621 is 0.2 Å, and its 16th and 84th percentiles are 0.1 and 0.4 Å. The mean completeness at EW 8621 ∼ 0.2 Å is about 65%, and between 0.1 and 0.4 Å it is 68%. This is an optimistic estimate because the injection test was simple and idealized. Nevertheless, this percentage is still much higher than the fraction of the joint sample between this work and AS23 relative to the total (∼25%), which could be a result of the selection bias of the DIB catalogs (see Appendix D for a detailed discussion).
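The completeness estimate amounts to counting recovered injections per EW bin, as sketched below; inj and fit are assumed to be dictionaries of arrays holding the injected and recovered parameters.

```python
import numpy as np

def completeness(inj, fit, ew_bins):
    """Fraction of injected DIBs recovered within the stated tolerances."""
    ok = ((np.abs(fit["D"] - inj["D"]) / inj["D"] < 0.10) &
          (np.abs(fit["sigma"] - inj["sigma"]) / inj["sigma"] < 0.10) &
          (np.abs(fit["lam"] - inj["lam"]) < 0.5))
    rec, _ = np.histogram(inj["EW"][ok], bins=ew_bins)
    tot, _ = np.histogram(inj["EW"], bins=ew_bins)
    return rec / np.maximum(tot, 1)
```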
The estimated completeness for DIB λ8648 depends much less on EW 8648 and σ 8648 , indicating a stronger influence of the correlated noise and stellar residuals on the completeness of the DIB λ8648 measurements. At EW 8648 ∼ 0.5 Å, the mean completeness is only 25%. If we used loose criteria, the same as those in AS23, the completeness would increase to 53%. Although the total number of spectra in the target sample that truly contain DIB λ8648 signals is unknown, we apparently did not obtain enough DIB λ8648 measurements (only 179 after quality filtering) even in the selected DIB catalog. One possible reason is that the selection criteria are too rigorous for DIB λ8648. Another is that DIB λ8648 does not exist in every spectrum with DIB λ8621 signals, that is, fewer DIB λ8648 sightlines can be detected than for DIB λ8621. In that case, the completeness for DIB λ8648 would be overestimated as well.
Based on the estimated completeness and the comparison to AS23 (Appendix D), the DIB catalog in this work is very pure, but only marginally complete. Increasing both the purity and the completeness of the DIB catalog is a critical problem. However, our RF model cannot meaningfully estimate the uncertainty of its predictions, which is a common limitation of ML approaches. We will explore further models and more sophisticated injection tests or simulations in following works.
Summary and conclusions
We developed a random forest model for building the stellar templates within the DIB window (8600-8680 Å) using the part of the spectrum outside the window. This method can be treated as an improvement of the best-neighbor method developed in Kos et al. (2013) and was applied to the RVS spectra published in Gaia DR3. The training set comprised 21 974 spectra with E(B − V) ⩽ 0.02 mag, |b| ⩾ 30°, and S/N > 50. After removing the stellar components with the generated templates, we fit DIB λ8621 with a Gaussian function and DIB λ8648 with a Lorentzian function, as well as a linear continuum, for 780 513 target spectra. These target spectra have a mean S/N of 58, and 90% of them have S/N below 116. The mean distance of the background stars is 1.58 kpc, and 90% of them are located within 3.61 kpc.
Considering χ²dof, the noise level (S/N and R C ), and the constraints on λ 8621 and σ 8621 , we selected 7619 reliable measurements for DIB λ8621 (the DIB catalog can be accessed via the CDS database). Their EW 8621 presents a moderate linear correlation with dust reddening from both Andrae et al. (2023) and Green et al. (2019), and the mean EW 8621 /A V ratio is consistent with our previous results in GDR3 and GFPR. Using 77 DIB measurements toward the GAC and the assumption of circular orbits, we determined an updated rest-frame wavelength of DIB λ8621 of λ 0 = 8623.141 ± 0.030 Å in vacuum, corresponding to 8620.766 Å in air, which is perfectly consistent with the result in AS23, but bluer than that in GDR3. Calculated with this λ 0 , V DIB in the LSR shows a wave pattern with the Galactic longitude, revealing the projected Galactic rotation of the carrier of DIB λ8621. The median V DIB also correlates with the 12 CO velocity structures in the local region, especially in the outer Galactic disk. With the peak-finding method used in AS23 and a narrow-range integration, we compared EW 8621 to the neutral (N H I , from HI4PI Collaboration 2016) and molecular (N H 2 , represented by 12 CO) hydrogen column densities. This was a coarse analysis, but we found that EW 8621 correlates much better with N H I (r p = 0.74) than with N H 2 (r p = 0.21), which is consistent with the conclusions for strong optical DIBs in previous studies.
With rigorous quality control, we obtained 179 reliable measurements of DIB λ8648 in individual RVS spectra, which further confirms this very broad DIB feature. Its EW and central wavelength both present a moderate linear relation with those of DIB λ8621. The λ 0 of DIB λ8648 was estimated as 8646.31 Å in vacuum, corresponding to 8643.93 Å in air, assuming that the carriers of λ8621 and λ8648 are comoving.
By comparing the DIB parameters in GDR3, in AS23, and in this work, we confirmed the impact of the stellar residuals on the DIB measurements in Gaia DR3 argued by AS23. The stellar impact leads to a distortion of the DIB profile, resulting in an underestimation of D 8621 and an overestimation of σ 8621 . The center of the DIB profile might also be shifted (≲0.5 Å), but EW 8621 is consistent with our new measurements in this work, with a median difference of only −0.002 Å and an RMSD of 0.030 Å.
EW 8621 in this work is systematically larger than that in AS23, and the difference increases with the fit EW. The reason might be the different ML algorithms and fitting techniques used in the two works. The selection bias of the DIB catalog was clearly revealed by the cross-matched samples between AS23 and this work. The DIB catalog is very pure but has a low completeness. In following works, we will apply more ML algorithms to different survey data and investigate their consistency and/or systematic differences.
Section 3 introduces our ML model and the fitting of DIBs λ8621 and λ8648. In Sect. 4, we investigate the intensity and kinematic properties of DIB λ8621, analyze the detection of DIB λ8648 in individual RVS spectra, and reassess the results of λ8621 in GDR3. The correlation between the strength of DIB λ8621 and the hydrogen abundance, as well as the completeness of the DIB catalog, are discussed in Sect. 5. The main conclusions are summarized in Sect. 6.
Fig. 2. Two-dimensional distributions of the stellar atmospheric parameters (T eff , log g, [M/H]) from GSP-Spec (Recio-Blanco et al. 2023) for the training set, the validation set, the testing set, and the target set. The color represents the number of stars counted in each bin with a size of ∆T eff = 40 K, ∆ log g = 0.05, and ∆[M/H] = 0.04 dex.
Fig. 3. Residuals between the observed and modeled normalized fluxes as a function of wavelength for the 17 324 RVS spectra in the testing set. The color represents the residuals, and the spectra are sorted by spectral S/N, increasing from bottom to top. S/N = 22, 50, and 100 are indicated by the dashed black, blue, and red lines, respectively. The Ca II line within the DIB window is marked as a dashed green line.
Fig. 5. Examples of the fits to DIBs λ8621 and λ8648 in four ISM spectra. The black and red lines are the ISM spectra and the fit DIB profiles, respectively, normalized by the fit linear continuum. The error bars indicate the observational flux uncertainties of the RVS spectra. Orange marks the masked region during the fittings. The Gaia source ID of these targets, E(BP − RP) from Andrae et al. (2023), the EWs of the two DIBs (EW 8621 and EW 8648 ), and the S/N of the ISM spectra are indicated as well.
Fig. 6. Number density of the measured DIB λ8621 as a function of the Gaussian width (σ 8621 ) and the central wavelength (λ 8621 ) of the fit DIB profile for the full target sample (left), the 8388 selected DIB measurements (middle), and the reliable measurements in AS23 (right). The color of the left panel represents the number of DIBs calculated in 0.025 Å × 0.1 Å bins. In the middle and right panels, the color represents the number density estimated by a Gaussian KDE. The dashed red line indicates the rest-frame wavelength of DIB λ8621 determined in this work (8623.141 Å; see Sect. 4.3).
Fig. 7. Distribution of the number of DIB measurements (N DIB , upper panel) and the mean EW 8621 (lower panel) in a Galactic projection for the selected DIB catalog (7619 measurements). The N DIB and mean EW 8621 were calculated in each HEALPixel with a resolution of 1.83° (N side = 32).
Fig. 8. Comparison between the fit and integrated EW 8621 for the 7619 measurements in the DIB catalog. Upper panel: the color represents the number density estimated by a Gaussian KDE. The gray bars show the uncertainty of the fit EW 8621 . The dashed red line traces the one-to-one correspondence. A zoom-in panel shows the distribution of the EW difference (∆EW = EW fit − EW int ). The mean (∆) and standard deviation (σ) of ∆EW are indicated. Lower panel: distribution of ∆EW as a function of the fit EW 8621 .
Fig. 9. Correlation between EW 8621 and dust reddening for 2957 cases with E(BP − RP) from Andrae et al. (2023) shown in the upper panel and for 4656 cases with A V from Green et al. (2019) shown in the lower panel. The color of the scattered points represents their number density estimated by the Gaussian KDE. The red dots and their error bars are the median values and the standard deviations calculated in each EW 8621 bin with a step of 0.05 Å. The linear fits to EW 8621 and dust reddening from previous works are overplotted as dashed lines: magenta for GFPR, black and red for GDR3, and blue for AS23.
Fig. 10. Observed central wavelengths in vacuum of DIB λ8621 in the heliocentric frame (λ helio ) as a function of the angular distance in longitude from the Galactic anticenter (∆ℓ) for the 77 DIB measurements in this work. The black dots are the individual measurements with their fit uncertainties. The red line is the linear fit to the black dots.
Fig. 11. Longitude-velocity diagram for DIB λ8621 and 12 CO. Upper panel: variation of the radial velocity of the DIB λ8621 carrier (V DIB ) with the Galactic longitude for 3592 selected DIB measurements. The points are colored by their number density estimated by the Gaussian KDE. The red dots with error bars are the median V DIB calculated in each longitude bin with a step of 10°. The colored lines are theoretical rotation curves calculated with the rotation model of Reid et al. (2019) for different distances from the Sun. Lower panel: median V DIB superimposed on the longitude-velocity map of 12 CO J = (1−0) emission from Dame et al. (2001). The color scale displays the 12 CO latitude-integrated intensity on a logarithmic scale.
Fig. 12. Correlations between DIBs λ8621 and λ8648 for the 179 selected measurements: (a) fit and integrated EW 8648 outside the masked region between 8660 and 8668 Å; (b) fit EW; (c) measured central wavelength; and (d) FWHM. The dashed green line in panel a traces the one-to-one correspondence. The colored points in panel b are the results from GFPR. The color in panel c represents the number density estimated by the Gaussian KDE, and the red line is a linear fit to all the data points. The Pearson coefficient (r p ) of the correlation between the parameters of λ8621 and λ8648 for the 179 selected measurements is indicated in panels b and c.
Fig. 13. Correlation between EW 8648 and A V from Green et al. (2019) for 93 selected measurements. The underlying points are the results from GFPR, colored by the number density. The Pearson coefficient (r p ) for the red dots is indicated as well.
Fig. 14. Difference in the DIB parameters (D 8621 , λ 8621 , σ 8621 , and EW 8621 ) between GDR3, AS23, and this work as a function of the fit values for the joint samples. The gray scale indicates the number density of the data points estimated by the Gaussian KDE. The data are binned with a step of 0.005 for D 8621 , 0.2 Å for λ 8621 , 0.1 Å for σ 8621 , and 0.01 Å for EW 8621 . The solid red lines in each panel represent the median differences in each bin, and the two dashed red lines show the 16th and 84th percentiles.
Fig. 16. Correlation between EW 8621 and N H I (upper panel) and between EW 8621 and N H 2 (lower panel) on a logarithmic scale. The Pearson coefficient (r p ) is indicated. The color scale indicates the number density. The dashed lines are the linear fit results from Friedman et al. (2011), but for DIB λ5780.
Fig. 17. Distribution of the estimated completeness for DIBs λ8621 (upper panel) and λ8648 (lower panel) as a function of EW and σ DIB , based on the results of the injection tests (see Appendix B).
Table 1. Statistics of the differences in the DIB parameters between GDR3, AS23, and this work.
"Physics"
] |
The Cauchy problem at a node with buffer
We consider the Lighthill-Whitham-Richards traffic flow model
on a network composed by an arbitrary number of incoming and
outgoing arcs connected together by a node with a buffer.
Similar to [15],
we define the solution to the Riemann problem at the node
and we prove existence
and well posedness of solutions to the Cauchy problem,
by using the wave-front tracking technique and the generalized tangent
vectors.
Introduction
Fluid dynamic models were developed in the literature in order to describe the macroscopic evolution of vehicular traffic in roads and in networks. In the network setting, different kinds of solutions at the intersections were recently proposed; see [6,7,8,9,14,15,16,17,20] and the references therein. The interest in this field was also motivated by other applications: data networks [8], supply chains [13], air traffic management [22], gas pipelines [1].
In this paper we consider the scalar Lighthill-Whitham-Richards model (see [19,21]) on a network composed by a single junction with a buffer with finite size and capacity. Nodes with buffers have been introduced in the case of supply chains in [14] and also for car traffic in [12,15]. These kinds of intersections take into account some dynamics inside the junction, described by ordinary differential equations depending on the difference between incoming and outgoing fluxes.
In the following sections, we prove existence and well posedness of solutions at the node with buffer and with an arbitrary number of incoming and outgoing roads. The results are obtained by means of the wave-front tracking method [3,18] and of the generalized tangent vectors [2,5]. In our case, the wave-front tracking method consists in producing piecewise constant approximate solutions, both for the density of cars and for the load of the buffer, and in proving uniform estimates for the approximate solutions in order to obtain compactness and hence existence of solutions. The Lipschitz continuous dependence of the solution on the initial condition is instead proved by viewing the vector space L 1 as a Finsler manifold and by considering the evolution in time of generalized tangent vectors along wave-front tracking approximate solutions. We remark that the results contained in [14] do not apply in our situation, while the papers [12,15] describe only special cases of Riemann problems.
The paper is organized as follows. Section 2 contains some preliminary notations and definitions, while Section 3 describes in details the solution of Riemann problems at the node. Sections 4 and 5 deal respectively with the existence of solution and with the continuous dependence of the solution with respect to the initial condition. Finally, we recall in the appendix, for reader's convenience, some technical results of [11].
Basic Definitions and Notations
Consider a node J with n incoming arcs I 1 , . . . , I n and m outgoing arcs I n+1 , . . . , I n+m . We model each incoming arc I i (i ∈ {1, . . . , n}) of the node with the real interval I i =] − ∞, 0]. Similarly, we model each outgoing arc I j (j ∈ {n + 1, . . . , n + m}) of the node with the real interval I j = [0, +∞[. On each arc I l (l ∈ {1, . . . , n + m}), we consider the partial differential equation

∂ t ρ l + ∂ x f (ρ l ) = 0,

where ρ l = ρ l (t, x) ∈ [0, ρ max ] is the density of cars, v l = v l (ρ l ) is the mean velocity of cars, and f (ρ l ) = v l (ρ l ) ρ l is the flux. Moreover, the real valued function r(t) ∈ [0, r max ] denotes the total number of cars in the buffer inside the node J at time t.
We make the following assumptions on the flux f: f(0) = f(1) = 0 and f is strictly concave, so that it has a unique point of maximum σ ∈ ]0, 1[. The definitions of entropic solutions on arcs and of weak solutions at the node are as follows.
The Riemann Problem with buffer
Consider a node J with a buffer, whose demand and supply are equal to a constant µ ∈ ]0, max{n, m} f (σ)[. Fix ρ 1,0 , . . . , ρ n+m,0 ∈ [0, 1], r 0 ∈ [0, r max ] and consider the Riemann problem at J, in which the evolution of the buffer load is governed by

ṙ(t) = ∑_{i=1}^{n} f (ρ i (t, 0−)) − ∑_{j=n+1}^{n+m} f (ρ j (t, 0+)),    (7)

with r(0) = r 0 . A solution to the Riemann problem at J is defined in the following way.

Definition 3 A solution to the Riemann problem (7) is a weak solution at J, in the sense of Definition 2, such that ρ l (0, x) = ρ l,0 for every l ∈ {1, . . . , n + m} and for a.e. x ∈ I l , and such that r(0) = r 0 .
We introduce the concept of Riemann solver at J.
Definition 4 A Riemann solver RS is a function assigning to the initial data (ρ 1,0 , . . . , ρ n+m,0 , r 0 ) the traces at the node and satisfying the following:
1. for every i ∈ {1, . . . , n}, the classical Riemann problem is solved with waves with negative speed;
2. for every j ∈ {n + 1, . . . , n + m}, the classical Riemann problem is solved with waves with positive speed.
Introduce the following sets: for every j ∈ {n + 1, . . . , n + m}, define a set O j of admissible fluxes at J, and for every s ∈ [0, ∑_{j=n+1}^{n+m} max O j ], define the corresponding sets. In [15], the authors proposed to solve the Riemann problem (7) in the following way.
Figure 1: The solution to the Riemann problem (7): the case Γ inc > Γ out , in which the buffer fills up (r(t) = r max ), on the left; the case Γ inc < Γ out , in which the buffer empties (r(t) = 0), on the right.
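The bookkeeping behind Figure 1 can be sketched schematically as follows; this is not the paper's exact Riemann solver (whose precise sets and cases are given in [15] and in Section 3), but it illustrates how the buffer state caps the fluxes through J. The flux f and the critical density σ = 1/2 are illustrative choices.

```python
def f(rho):                      # concave flux with maximum at sigma = 0.5
    return rho * (1.0 - rho)

SIGMA = 0.5

def demand(rho):                 # maximal flux an incoming arc can send to J
    return f(rho) if rho < SIGMA else f(SIGMA)

def supply(rho):                 # maximal flux an outgoing arc can absorb
    return f(SIGMA) if rho < SIGMA else f(rho)

def node_fluxes(rho_in, rho_out, r, r_max, mu):
    d = sum(demand(x) for x in rho_in)
    s = sum(supply(x) for x in rho_out)
    # A full buffer cannot accept more than the outgoing arcs absorb;
    # an empty buffer cannot release more than the incoming arcs provide.
    gamma_inc = min(d, mu) if r < r_max else min(d, mu, s)
    gamma_out = min(s, mu) if r > 0 else min(s, mu, gamma_inc)
    return gamma_inc, gamma_out  # the buffer evolves as r' = gamma_inc - gamma_out
```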
For future use, we need some additional definitions.
Definition 6
We say that a datum ρ i ∈ [0, 1] in an incoming arc is a good datum if ρ i ∈ [σ, 1] and a bad datum otherwise. We say that a datum ρ j ∈ [0, 1] in an outgoing arc is a good datum if ρ j ∈ [0, σ] and a bad datum otherwise.
Wave-front tracking
Since solutions to Riemann problems are given, we are able to construct piecewise constant approximations via the wave-front tracking algorithm; see [3] for the general theory and [10] in the case of networks. Definition 7 Given ε > 0, we say that the maps ρ ε = (ρ 1,ε , . . . , ρ n+m,ε ) and r ε are an ε-approximate wave-front tracking solution to (18) if the following conditions hold.
6. For every l ∈ {1, . . . , n + m}, . . .

For every l ∈ {1, . . . , n + m}, consider a sequence ρ 0,l,ν of piecewise constant functions defined on I l such that ρ 0,l,ν has a finite number of discontinuities and lim ν→+∞ ρ 0,l,ν = ρ 0,l in L 1 loc (I l ; [0, 1]). For every ν ∈ N \ {0}, we apply the following procedure. At time t = 0, we solve the Riemann problem at J (according to RS r 0 ) and all the Riemann problems in each arc. We approximate every rarefaction wave with a rarefaction fan, formed by rarefaction shocks of strength less than 1/ν travelling with the Rankine-Hugoniot speed. Moreover, if σ is in the range of a rarefaction shock, then its speed is zero. We repeat the previous construction at every time at which interactions between waves, or of waves with J, happen, and at the times when the buffer becomes empty or full.
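A minimal sketch of one step of this construction, restricted to a single arc and the flux f(ρ) = ρ(1 − ρ), is given below; it shows how a Riemann problem is turned into a shock or into a rarefaction fan of small shocks of strength at most 1/ν, with zero speed assigned to the fan step containing σ.

```python
def f(rho):
    return rho * (1.0 - rho)     # concave flux, maximum at sigma = 0.5

SIGMA = 0.5

def rh_speed(a, b):
    """Rankine-Hugoniot speed of the jump between the states a and b."""
    return (f(b) - f(a)) / (b - a)

def solve_riemann(rl, rr, nu=10):
    """Wave fronts (left state, right state, speed) for a concave flux."""
    if rl < rr:                  # admissible shock for a concave flux
        return [(rl, rr, rh_speed(rl, rr))]
    fronts = []
    step = (rl - rr) / max(1, round((rl - rr) * nu))   # strength <= 1/nu
    a = rl
    while a - step >= rr - 1e-12:                      # rarefaction fan
        b = a - step
        speed = 0.0 if b < SIGMA < a else rh_speed(a, b)
        fronts.append((a, b, speed))
        a = b
    return fronts

print(solve_riemann(0.8, 0.1))   # a fan of seven small rarefaction shocks
```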
Remark 3 By slightly modifying the speed of waves, we may assume that, at every positive time t, at most one interaction happens. Moreover, at every interaction time t̄, exactly one of the following possibilities is verified.
1. Two waves interact in an arc.
2. A wave reaches the node J.
3. Some waves exit the node J.
Remark 4 For interactions in arcs, we split rarefaction waves into rarefaction fans just at time t = 0. At the node J, instead, we allow the formation of rarefaction fans at every positive time.
Let us introduce the notions of generation order for waves, of big shocks, and of waves with increasing or decreasing flux. We need these definitions in the proofs of the existence of a wave-front tracking approximate solution and of a uniform bound for the total variation of the flux.
Definition 8 A wave of ρ ε generated at time t = 0 is called an original wave, or a wave with generation order 1.
If a wave with generation order k ≥ 1 interacts with J, then the produced waves are said to have generation order k + 1.
If a wave with generation order k ≥ 1 interacts in an arc with a wave with generation order k ′ ≥ 1, then the produced wave is said to have generation order min{k, k ′ }.
If a wave exits the node J at time t̄ > 0, then it has generation order 2 if no wave interacts with J in the time interval [0, t̄[; otherwise it has generation order k + 1, where k is the generation order of the wave that interacts with J at the last time t̃ < t̄, i.e. such that no wave interacts with J in the time interval ]t̃, t̄[.
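The bookkeeping behind Definition 8 reduces to two rules; the sketch below records them as helpers, with the surrounding event loop assumed rather than shown.

```python
# Bookkeeping sketch for Definition 8: generation orders of fronts.
# The event loop that detects interactions and calls these helpers is
# assumed, not shown.
def order_after_node(incoming_order):
    """A wave of order k hitting J produces waves of order k + 1."""
    return incoming_order + 1

def order_after_arc_interaction(k, k_prime):
    """Two waves of orders k, k' merging in an arc produce min(k, k')."""
    return min(k, k_prime)
```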
Definition 9 We say that a wave (ρ l , ρ r ) in an arc is a big shock if ρ l < σ < ρ r .
Definition 10 We say that a wave (ρ l , ρ r ) interacting with J from an incoming arc has decreasing flux (resp. increasing flux) if f (ρ l ) < f (ρ r ) (resp. f (ρ l ) > f (ρ r )). We say that a wave (ρ l , ρ r ) interacting with J from an outgoing arc has decreasing flux (resp. increasing flux) if f (ρ r ) < f (ρ l ) (resp. f (ρ r ) > f (ρ l )).
Bound on the total flux variation
Fix a wave-front tracking approximate solution of the Cauchy problem (18). We prove in Corollary 1 that the total variation of the flux is uniformly bounded by a constant depending only on the initial data. We first need some preliminary results.
Lemma 2 Assume that some waves exit the node J at a time t̄ > 0 and that hypothesis (19) holds. Then exactly one of the following possibilities holds.
1. If r ε (t̄−) = 0, then waves are generated at J at time t̄ only in the outgoing arcs.
2. If r ε (t̄−) = r max , then waves are generated at J at time t̄ only in the incoming arcs.
Proof. Denote by Γ 1± inc , Γ 1± out , Γ ± inc and Γ ± out the values at t̄− and t̄+ of the quantities introduced in Section 3, and by Γ 1 inc , Γ 1 out , Γ inc and Γ out their values at time t̄. By Remark 3, at time t̄ no wave interacts with J. Hypothesis (19) implies that either r ε (t̄) = 0 or r ε (t̄) = r max .

If r ε (t̄) = 0, we have Γ + inc = Γ inc = Γ − inc = Γ + out and so Q(t̄+) = 0. Moreover, no waves exit from the incoming arcs, while, by (13) and the fact that r ε (t) = 0 for t in a left neighborhood of t̄, Γ − out > Γ out and so some waves exit from the outgoing arcs. By Lemma 4 in [11], all the waves generated at time t̄ have decreasing flux.

If r ε (t̄) = r max , we have Γ + inc = Γ − out = Γ out = Γ + out and so Q(t̄+) = 0. Moreover, no waves exit from the outgoing arcs, while, by (12) and the fact that r ε (t) = r max for t in a left neighborhood of t̄, Γ − inc > Γ inc and so some waves are generated in the incoming arcs. By Lemma 4 in [11], all the waves generated at time t̄ have decreasing flux.

This concludes the proof. □

Lemma 3 Assume that a wave (ρ l , ρ r ) interacts with J at time t̄ and suppose that r ε (t̄) = 0. Then Υ(t̄+) = Υ(t̄−) and TV f (t̄+) ≤ TV f (t̄−).
In this case Γ 1+ inc < µ and Γ + inc = Γ 1+ inc . Therefore no wave is generated in I 1 , and the waves generated in the other incoming arcs have increasing flux. Moreover Γ + inc = Γ + out < Γ − out . By [11, Lemma 4], the waves generated in the outgoing arcs have decreasing flux, and the conclusion follows, since Q(t̄+) = 0.
Γ + inc = Γ − inc and Γ − out = Γ + out . Therefore no waves are generated in the outgoing arcs nor in I 1 . By [11, Lemma 5], the waves produced in the other incoming arcs have increasing flux if f (ρ l ) < f (ρ r ), and decreasing flux if f (ρ l ) > f (ρ r ). Thus we conclude, since Q(t̄+) = 0.
In this case we have Γ − inc < µ, and so ρ − i ≤ σ for every i ∈ {1, . . . , n}. Since the wave (ρ l , ρ r ) has positive speed, ρ − 1 = ρ r < ρ l ≤ σ. Therefore, in the incoming arcs, either no waves are produced (in the case Γ 1+ inc ≤ µ) or waves with decreasing flux are generated (in the case Γ 1+ inc > µ). In the outgoing arcs, by (13) we easily deduce that Γ + out ≥ Γ − out , and so either no waves are created or waves with increasing flux are generated; see [11, Lemma 4].
Finally, suppose that the wave (ρ l , ρ r ) interacts with J from an outgoing arc, say I n+1 , and so ρ r ≥ σ and ρ l = ρ − n+1 . In this case Γ − inc = Γ + inc and so no waves are produced in the incoming arcs. There are three different possibilities.
1. Γ + out < Γ − out . In this case Γ + out = Γ 1+ out < Γ + inc , and so no wave is generated in I n+1 , while in the other outgoing arcs at most m − 1 waves are generated and they have increasing flux. Therefore Υ(t̄+) = Υ(t̄−).
2. Γ + out = Γ − out . In this case Γ + out = Γ − out = Γ + inc and no wave is generated in I n+1 . In the other outgoing arcs, at most m − 1 waves are generated.
The proof is similar to that of Lemma 3 and so we omit it.
Proof. We denote by (ρ − 1 , . . . , ρ − n+m ) and by (ρ + 1 , . . . , ρ + n+m ) the states at J before and after the interaction, respectively. Define also by Γ 1± inc , Γ 1± out , Γ ± inc and Γ ± out the values at t̄− and t̄+ of the quantities introduced in Section 3. We have 0 < r ε (t) < r max in a left neighborhood of t̄. Assume that the wave (ρ l , ρ r ) interacts with J from an incoming arc, say I 1 . Thus ρ l ≤ σ and ρ r = ρ − 1 . Moreover Γ + out = Γ − out , and so no wave is produced in the outgoing arcs. We have three possibilities.
In this case at most n waves are generated in the incoming arcs. If f (ρ l ) < f (ρ r ), then ρ + 1 = ρ l and so no wave is produced in I 1 , while the waves generated in the other incoming arcs have increasing flux, by [11, Lemma 5]. If f (ρ l ) > f (ρ r ), then f (ρ r ) ≤ f (ρ + 1 ) ≤ f (ρ l ) and the waves generated in I 2 , . . . , I n have decreasing flux, by [11, Lemma 5]. The claim follows by the previous considerations. Moreover Q(t̄−) = Q(t̄+), and so we conclude that Υ(t̄−) ≥ Υ(t̄+).
The case of the wave (ρ l , ρ r ) interacting with J from an outgoing arc is similar to the previous one. □

Lemmas 2-5 imply that the functional Υ is non-increasing, as stated in the following proposition.
Proposition 1 For a.e. t > 0, we have Υ(t+) ≤ Υ(t−).

Proof. The functional Υ is piecewise constant in time and can vary only when two waves interact inside an arc or when a wave hits or exits the node. If two waves interact in an arc, then TV f is non-increasing and Q remains constant; hence Υ is non-increasing. Consider therefore the case of a wave interacting with, or exiting from, the node at time t̄. For simplicity, denote by Γ 1± inc , Γ 1± out , Γ ± inc and Γ ± out the values at t̄− and t̄+ of the quantities introduced in Section 3. At the node, we have the following two cases.
• A wave (ρ l , ρ r ) hits the node at a certain time t̄. We have three different possibilities.
• A wave exits the node at a certain time t̄. In this case Lemma 2 states that Υ(t̄+) = Υ(t̄−).
This finishes the proof. □

Corollary 1 For every t > 0, the total variation of the flux TV f (t) is bounded by a constant depending only on the initial data.

Proof. By Proposition 1, we deduce that Υ(t) ≤ Υ(0+) for every t > 0. The conclusion follows from the fact that 0 ≤ Q(t) ≤ (n + m) f (σ) for every t ≥ 0. □
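For concreteness, the quantity bounded by Corollary 1 can be computed for a piecewise-constant wave-front tracking profile as follows; the quadratic flux is again a stand-in.

```python
import numpy as np

# Sketch of the quantity bounded in Corollary 1: the total variation of
# the flux of a piecewise-constant profile on one arc.
f = lambda rho: rho * (1.0 - rho)

def tv_flux(states):
    """TV_f of a profile given by its constant states, left to right."""
    fluxes = [f(s) for s in states]
    return float(np.sum(np.abs(np.diff(fluxes))))

print(tv_flux([0.2, 0.7, 0.4]))  # |f(0.7)-f(0.2)| + |f(0.4)-f(0.7)|
```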
Existence of a wave-front tracking solution
In this subsection, we prove the existence of a wave-front tracking approximate solution. We have the following proposition, whose proof is very similar to that of [11, Proposition 10]. Here we give the proof for completeness.
Proposition 2 For every ν ∈ N \ {0}, the construction in Subsection 4.1 can be carried out for every positive time, producing a 1/ν-approximate wave-front tracking solution to (18) with respect to the Riemann solver described in Definition 4.
Proof. Suppose by contradiction that the construction can be carried out only up to a finite time T > 0. The number of waves of each generation order is bounded as in (22), where K ν = 2(n + m)ν. This bound is due to the fact that each wave with generation order k can interact with J and produce at most ν waves with generation order k + 1 in each arc (in the case of rarefactions), and the same can happen a second time, when the function r ε reaches 0 or r max . Now, there exists 0 < η < T such that no wave with generation order 1 interacts with J in the time interval (T − η, T). Equation (22) also implies that in (T − η, T) there is an infinite number of interactions of waves with J. Since waves of generation order 1 do not interact with J in (T − η, T), the only possibility is that a wave with generation order k ≥ 2 comes back to J, producing waves of order k + 1, some of which come back to J producing waves of order k + 2, and so on. Moreover, by Lemma 4.3.7 of [10] (see the Appendix), if a wave of generation order k ≥ 2 interacts with J from an arc in (T − η, T), then, after the interaction, the datum in that arc is bad, since the wave cannot interact with waves of generation order 1 and come back to J. In an arc, a bad datum at J can change only in the following cases: 1. an original wave interacts with J from the arc; 2. a wave which is a big shock is originated at J on the arc and the new datum at J is good.
Obviously, in the time interval (T − η, T) the first possibility cannot happen, so only the second may occur. Assume that there exist t 1 , t 2 ∈ (T − η, T) with t 1 < t 2 such that a big shock is originated at J at time t 1 in an arc and comes back to J at time t 2 . In this arc, the datum before t 1 is bad, since a big shock is originated at time t 1 . Moreover, the big shock comes back to J at time t 2 , and so an original wave cannot interact with the big shock; hence the bad datum of the big shock does not change. Therefore, in that arc, the datum after time t 2 is bad and is the same as the datum before t 1 . Thus every arc I l may take only one precise bad value ρ̄ l , and otherwise good values. The key point is that, at every time t ∈ (T − η, T), there are finitely many possible combinations of bad data at the node J (obtained by choosing the arcs which present a bad datum at J, the precise value being fixed). Since the Riemann solvers RS rε(t) are in fact at most three (RS 0 , RS rmax and RS r̄ with r̄ ∈ ]0, r max [ arbitrary), and since each of them satisfies property (P1) of [11] (i.e. the image of a Riemann solver depends only on the bad data; for a proof see [11, Section 4.2]), we deduce that, for t ∈ (T − η, T), ρ ν (t) at J may take only a finite number of values; thus the waves produced by J have a finite set of possible velocities. Denote by G the set of all l ∈ {1, . . . , n + m} such that ρ ν,l (t, 0) is a good datum for every time t in a left neighborhood of T. Consider l̄ ∈ G. We claim that there exists a constant C l̄ > 0 such that N l̄,ν (t) ≤ C l̄ for every time t in a left neighborhood of T. Indeed, the number of different states which can be produced at J is finite by the previous considerations. Since all these states are good, there is a minimal size of the flux jump along a discontinuity; hence the total number of discontinuities is necessarily bounded, by Corollary 1. Consider now l̄ ∈ {1, . . . , n + m} \ G. If ρ ν,l̄ (t, 0) is a bad datum for every time t in a left neighborhood of T, then clearly N l̄,ν (t) is uniformly bounded in the same time interval. The other case is that a big shock is originated in the arc I l̄ and comes back to J infinitely many times. We claim that there exists a constant C l̄ > 0 such that N l̄,ν (t) ≤ C l̄ for every time t ∈ [t̄ 1 , t̄ 2 ], where t̄ 1 and t̄ 2 are the times at which a big shock, respectively, is originated at J in I l̄ and comes back to J. In fact, in the time interval ]t̄ 1 , t̄ 2 [, the datum ρ ν,l̄ (t, 0) is good and the number of possible different states between J and the big shock is finite. Therefore, as before, if the number of discontinuities could not be bounded by a constant, then the total variation of the flux could not be bounded either, contradicting Corollary 1.
This concludes the proof by contradiction.
Existence of a solution
This part deals with the proof of Theorem 1.
Concerning r ε , the Ascoli-Arzelà theorem guarantees the uniform convergence of a subsequence r ε k → r. Moreover, the Dunford-Pettis theorem implies the weak compactness of {r ′ ε } ε in L 1 ([0, T]); thus, up to a subsequence, r ′ ε k ⇀ s weakly in L 1 ([0, T]) and r ′ = s in the weak sense. Thus, passing to the limit in the wave-front tracking approximations, we obtain that (ρ, r) satisfies points 3 and 4 of Theorem 1. □
Dependence of solutions on initial data
In this section we prove that, for every type of node, the solution to (18) depends in a Lipschitz continuous way on the initial condition.
We use the technique of generalized tangent vectors, introduced in [4,5] for hyperbolic systems of conservation laws. A complete description, in the case of scalar conservation laws on networks, is given in [11]. Here we only analyze the estimates on the shifts of waves along wave-front tracking approximate solutions at the node. We first recall the definition of a shift of a wave.
Definition 11 Fix ξ ∈ R and a wave (ρ l , ρ r ) of an ε-approximate wave-front tracking solution to (18). We say that ξ forms a shift for the wave (ρ l , ρ r ) if we consider the same ε-approximate wave-front tracking solution, except for the position of the wave (ρ l , ρ r ), which is translated by the quantity ξ in the x-direction.
The proof of the continuous dependence is based on the following lemmas.
If ξ − k is a shift of the wave defined by the states ρ̄ k and ρ − k , then the function r ε is shifted accordingly.
Let t̄ > t 1 + h be the first time at which r h ε (t̄) = 0 or r h ε (t̄) = r max . Thus the waves (ρ + l , ρ̄ + l ) are shifted in time by t̄ − t 2 = ((v 2 − v̄ 1 )/v 2 ) h. This permits us to conclude. Proof. First consider variations in the ρ component of the initial condition. As in the proof of Theorem 17 of [11], we can restrict the study to the evolution of the shifts. Fix a time t̄ > 0; we have the following possibilities.
a) No interaction of waves takes place in any arc at t̄ and no wave interacts with J. In this case the shifts are constant. b) Two waves interact at t̄ in an arc and no other interaction takes place. In this case the norms of the tangent vectors are decreasing, by Lemma 2.7.2 of [10].
c) A wave interacts with J at a time t̄ from the arc I k and no other interaction takes place. Denote by ρ̄ k the state on the other side of the interacting wave, so that the wave is (ρ̄ k , ρ − k ). Using Lemma 6 and its notation, we deduce the conclusion from Lemmas 3, 4 and 5.
d) Waves exit J at a time t̄ > 0 and no other interaction takes place. Define t̃ ∈ [0, t̄[ in the following way: t̃ = 0 if no interaction at J happens in the time interval (0, t̄); otherwise, t̃ is the time at which a wave reaches J such that no other interaction at J happens in the time interval (t̃, t̄). If t̃ = 0, then, since no variation in r 0 occurs, no shift appears.
| 6,308.4 | 2012-02-01T00:00:00.000 | [ "Mathematics" ] |
Improving the sensitivity of J coupling measurements in solids with application to disordered materials
It has been shown previously that for magic angle spinning (MAS) solid state NMR the refocused INADEQUATE spin-echo (REINE) experiment can usefully quantify scalar (J) couplings in disordered solids. This paper focuses on the two z filter components in the original REINE pulse sequence, and investigates by means of a product operator analysis and fits to density matrix simulations the effects that their removal has on the sensitivity of the experiment and on the accuracy of the extracted J couplings. The first z filter proves unnecessary in all the cases investigated here and removing it increases the sensitivity of the experiment by a factor ∼1.1–2.0. Furthermore, for systems with broad isotropic chemical shift distributions (namely whose full widths at half maximum are greater than 30 times the mean J coupling strength), the second z filter can also be removed, thus allowing whole-echo acquisition and providing an additional √2 gain in sensitivity. Considering both random and systematic errors in the values obtained, J couplings determined by fitting the intensity modulations of REINE experiments carry an uncertainty of 0.2–1.0 Hz (∼1−10 %).
I. INTRODUCTION
Scalar (J) couplings in NMR are a rich but relatively under-exploited source of structural information for disordered solids. [6][7][8] The impressive achievements of NMR crystallography [9][10][11][12][13][14] and the ab initio methods developed in parallel for the calculation of J couplings 15,16 suggest that a similar experimental-computational approach, enhanced by medium-range information from the J couplings, could provide an improved understanding of the structure of non-crystalline solids. [22][23][24][25][26][27][28] In particular, the refocused INADEQUATE spin-echo (REINE) experiment has been used to measure distributions of J coupling strengths in cellulose 29 and in a phosphate glass. 30 These distributions offer far finer structural insights than do the average couplings revealed by simple spin-echo (J-resolved) experiments. 30 More than the protracted nature of the measurements, however, it is the lower sensitivity of the multidimensional pulse sequences that is the main barrier to their applicability to a wider range of systems, recent improvements in NMR hardware notwithstanding. In the REINE experiment (Figure 1), the double-quantum filter separates the signal from differently bonded pairs according to the sum of their individual chemical shifts. The refocusing spin-echo increases the efficiency of the sequence for disordered solids, 1 while the final spin-echo element in the sequence encodes the chemical shift-separated signals with the J coupling strength of the corresponding pairs. The z filters on either side of the modulating spin-echo ensure that only the in-phase NMR signal is acquired during the detection period. An obvious way to improve the sensitivity of the REINE experiment is simply to remove the z filters. The present report demonstrates that these elements of the pulse sequence are not essential to the accuracy of the associated measurements. In this context, the different sources of error and the intrinsic precision of the spin-echo approach to measuring scalar couplings are also discussed. The different intensity modulations predicted from a product operator analysis of the different versions of the REINE experiment are verified for simulated crystalline, disordered and glassy systems.
A. Product operator analysis
The J-modulating spin echo in the REINE pulse sequence (Figure 1) is flanked by two z filters, 31 whose role is to suppress unwanted anti-phase contributions to the spectra. To ascertain the nature of these distortions, a product operator 32 analysis is presented, starting from the previously published results for the refocused INADEQUATE experiment. 33 The calculations consider isolated spin-1/2 pairs (A and B) interacting via a J coupling under solution-like conditions. That is, (i) the spinning frequency is high enough to completely eliminate homonuclear dipolar interactions (over an integer number of rotor cycles), (ii) rotational resonance conditions are avoided, allowing anisotropic chemical shift interactions to be neglected, 34 and (iii) the decoupling is efficient enough (or the proton content low enough) for heteronuclear dipolar couplings to be neglected. (This is typically the case for 31 P or 29 Si nuclei in oxide glasses, for instance.) The density matrix at the end of the refocused INADEQUATE sequence is given in Equation (1), 33 where M A 0 and M B 0 are the magnitudes of the transverse magnetization on spins A and B, respectively, created by the initial pulse. Abbreviations have been introduced for the build-up of the in-phase (A y , B y ) and anti-phase (2A x B z , 2A z B x ) single-quantum coherences over the four τ delays in the sequence (see Figure 1); the same abbreviation scheme is used below for the sine and cosine evolution of the operators during the τ j and t 2 delays. Assuming that the parameters of the first z filter are set so as not to affect the in-phase signal, i.e. the delay is not unnecessarily long and the phase cycle removes the anti-phase terms, 33 Equation (1) simplifies, with Z 1 = 0 for a perfect z filter and Z 1 = 1 in the absence thereof. Both in-phase and anti-phase coherences evolve under the scalar coupling during the J-modulating spin echo. Gathering the terms in Equations (5) and (6), and accounting for the effect (Z 2 ) of the second z filter on the anti-phase coherences, the detectable 39 density matrix evolves during the detection period as in Equation (7). Table I compares the τ build-up, z-filter attenuation, and τ j evolution of the different terms in Eq. (7). Considering first the build-up of each of the four terms: under solution-like conditions, optimal INADEQUATE efficiency will be achieved for Jτ = 1/4, and terms 2 and 3 disappear. In this case, the cosine J-modulation of the REINE signal (term 1) will be observed as long as the second z filter removes the anti-phase contributions to the spectra (term 4). In solids, however, as described previously by Lesage et al., 1 transverse (T ′ 2 ) dephasing means that the optimal value of Jτ (Jτ opt ) is no longer a root of the cosine build-up terms (see Figure S1 in the supplementary material). 35 These terms therefore become important and contribute sine J-modulated in-phase REINE intensity (term 2 in Table I). In other words, the first z filter eliminates a sizeable proportion of the total in-phase signal. With applications to disordered materials (broad lineshapes) in mind, the hypothesis investigated here regarding the second z filter (terms 3 and 4) is that the integrated intensity from the anti-phase doublets tends to zero when the chemical shift distribution is much broader than the J splitting.
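The modulation terms discussed above can be illustrated numerically. The sketch below assumes the conventional spin-echo J modulation cos(2πJτ j ), with hypothetical weights A_cos and A_sin standing in for the build-up factors of terms 1 and 2 (setting A_sin = 0 recovers the z-filtered case); it is not the paper's Equations (5)-(7).

```python
import numpy as np

# Illustrative in-phase REINE intensity: a mixture of cosine- and
# sine-modulated terms under a T2' decay. A_cos and A_sin are hypothetical
# build-up weights, not values from the paper.
def reine_intensity(tau_j, J, A_cos, A_sin, T2prime):
    return (A_cos * np.cos(2 * np.pi * J * tau_j)
            + A_sin * np.sin(2 * np.pi * J * tau_j)) * np.exp(-2 * tau_j / T2prime)

tau_j = np.linspace(0.0, 0.036, 10)   # 0-36 ms, as in the simulations below
signal = reine_intensity(tau_j, J=19.0, A_cos=1.0, A_sin=0.3, T2prime=0.015)
```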
B. Sensitivity
Ignoring anti-phase terms, which make a zero net contribution to the spectrum on average, the total intensity accumulated in a REINE experiment at a frequency Ω in the ω 2 domain is given by Equation (8), where G(Ω; ω 2 ) is the functional form of the NMR spectrum, which for a powder sample under magic angle spinning (MAS) is typically a Gaussian distribution of overlapping Lorentzians. Note that the optimal value of τ will in practice be defined by JT ′ 2 , as described in the supplementary material. 35 Comparing now versions of the REINE experiment run under identical conditions, but either with or without the second z filter, the sensitivity ratio between the two can be written as in Equation (9), where the absolute value ensures that positive and negative intensities are accounted for equivalently. Evaluating Equation (9) for different (reasonable) values of JT ′ 2 (Table II and Figure S1a) 35 shows that omitting the second z filter is particularly effective when the J modulation occurs slowly with respect to the decay of the spin echo. For the values of JT ′ 2 considered here, the simplified version of the REINE experiment is, according to this analysis, up to twice as sensitive.
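A numeric reading of the sensitivity ratio in Equation (9), under the assumed modulation forms above (A_cos and A_sin are again hypothetical build-up weights), could look like this:

```python
import numpy as np

# Hedged sketch of Equation (9): ratio of total |intensity| accumulated
# without versus with the z filter, for assumed modulation weights.
def sensitivity_ratio(J, T2prime, tau_max, A_cos=1.0, A_sin=0.5, npts=10000):
    t = np.linspace(0.0, tau_max, npts)
    env = np.exp(-2 * t / T2prime)
    both = np.abs(A_cos * np.cos(2 * np.pi * J * t)
                  + A_sin * np.sin(2 * np.pi * J * t)) * env
    cos_only = np.abs(A_cos * np.cos(2 * np.pi * J * t)) * env
    return np.trapz(both, t) / np.trapz(cos_only, t)

print(sensitivity_ratio(J=19.0, T2prime=0.015, tau_max=0.036))
```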
A. Strategy
Figure 2 illustrates the strategy adopted here to investigate the viability of the proposed pulse sequence simplifications. One-dimensional (1D) REINE spectra are simulated for 10 values of τ j (Figure 2(a)). The evolution under the J coupling of each point in the spectrum is then fit using an appropriate function (Figures 2(b) and 2(c)), based on the theoretical analysis presented in Section II. (This approach reproduces the experimental workflow for measuring J coupling distributions using the REINE sequence, 29,30 the simulated spectra corresponding to slices of 2D spectra.) The values obtained from the fits are then compared with those initially simulated (Figure 2(d)).
The spectra consist of two Gaussian distributions of n overlapping J doublets, whose splitting is correlated with the chemical shift of the simulated atom pairs. A third atom is included in the simulations, coupled through space but not bonded to the other two. Starting from parameters representing the interactions between phosphate groups in a glass, several series of simulations are performed for different Gaussian linewidths (with n increasing or decreasing in proportion) to evaluate the applicability of the proposed simplifications to a variety of physical systems. The three versions of the experiment considered here (viz. with both z filters, with only the second, or with no z filters) are denoted A, B and C, respectively, from now on. Note that rotational resonance and multi-spin effects are not considered here.
B. Simulation parameters
The different versions of the REINE NMR experiment were simulated using SIMPSON 4.1.1, 36 for sets of three dipolar-coupled 31 P nuclei with different isotropic chemical shifts. For 10 values of τ j (0-36 ms, in 4 ms increments), simulations of experiments A, B and C were repeated n times, then co-added with Gaussian weighting, with the isotropic chemical shift and scalar coupling of the 31 P nuclei increased linearly, the latter from 11.5 to 26.5 Hz, and n being proportional to the width of the chemical shift distribution. The dipolar and (mean) scalar couplings between the spins were set to the values determined by Fayon et al. for crystalline TiP 2 O 7 . 34 The range of the J coupling distribution was defined based on the ones measured experimentally for a cadmium phosphate glass. 30 The chemical shift tensors and distributions of isotropic shifts were defined so as to obtain representative models of crystalline and glassy lead metaphosphates, 37 and of a magnesium metaphosphate glass. 38 The spin system and chemical shift distribution parameters used are listed in Table SI in the supplementary material, 35 along with a representative SIMPSON input file. Experiments B and C were simulated under 12.5 kHz magic angle spinning. For experiment A, however, since the simulations yield identical results whether the anisotropic interactions are considered or not (Table SII), 35 time-consuming magic angle spinning calculations were not required.

Notes to Table II: a The optimal (in terms of sensitivity) build-up time (τ) for the refocused INADEQUATE block (see Figure 1). b The value of τ j at the zero crossing (ZC), at which term 1 + term 2 = 0 (see Table I). c See Equation (3). d Equation (9) evaluated for the values of τ max j listed in the table; the normalized absolute sensitivity of the refocused INADEQUATE spin-echo experiment with the second z filter is shown in parentheses (see also Figure S1a).
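A stand-in for this co-addition step, with a toy Lorentzian lineshape replacing the SIMPSON output and with all linewidths and axis values illustrative only, might read:

```python
import numpy as np

# Toy spectrum: a Gaussian-weighted sum of n Lorentzian doublets whose
# J splitting is linearly correlated with the isotropic shift. This only
# mimics the structure of the co-added SIMPSON simulations.
def toy_spectrum(freq_hz, tau_j, n=41, J_lo=11.5, J_hi=26.5,
                 fwhm_hz=600.0, lw=20.0):
    Js = np.linspace(J_lo, J_hi, n)                 # J correlated with shift
    centres = np.linspace(-fwhm_hz, fwhm_hz, n)     # isotropic shifts (Hz)
    w = np.exp(-0.5 * (centres / (fwhm_hz / 2.355)) ** 2)  # Gaussian weights
    spec = np.zeros_like(freq_hz)
    for wi, c, J in zip(w, centres, Js):
        mod = np.cos(2 * np.pi * J * tau_j)         # J modulation at tau_j
        for sign in (-0.5, 0.5):                    # the two doublet lines
            spec += wi * mod * lw**2 / ((freq_hz - c - sign * J)**2 + lw**2)
    return spec

freq = np.linspace(-1500.0, 1500.0, 2048)
slice_0 = toy_spectrum(freq, tau_j=0.0)
```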
C. Analysis
The evolution under the scalar coupling of each point in the resulting spectra was fit using either Equation (10), for experiment A, or Equation (11), for experiments B and C, after normalizing the data and adding Gaussian noise (σ = 0.05) and an exponential decay (T ′ 2 = 15 or 45 ms). The results obtained using custom-written Python 2.7 scripts were confirmed using the fit command in Gnuplot 4.6.6 for a subset of the data. Fitting errors and the correlations between the fitting parameters were obtained from the associated covariance matrix.
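A sketch of this point-by-point fit is given below; the functional forms assigned to Equations (10) and (11) are assumptions consistent with the product operator analysis of Section II, not the paper's exact expressions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed forms: cosine with T2' decay (experiment A) and
# cosine-plus-sine with T2' decay (experiments B and C).
def eq10(tau_j, A, J, T2p):                    # stand-in for Equation (10)
    return A * np.cos(2 * np.pi * J * tau_j) * np.exp(-2 * tau_j / T2p)

def eq11(tau_j, A, B, J, T2p):                 # stand-in for Equation (11)
    return (A * np.cos(2 * np.pi * J * tau_j)
            + B * np.sin(2 * np.pi * J * tau_j)) * np.exp(-2 * tau_j / T2p)

tau_j = np.linspace(0.0, 0.036, 10)
data = eq10(tau_j, 1.0, 19.0, 0.015) + np.random.normal(0.0, 0.05, tau_j.size)
popt, pcov = curve_fit(eq10, tau_j, data, p0=[1.0, 15.0, 0.02])
J_fit, J_err = popt[1], np.sqrt(pcov[1, 1])    # value and fitting error (Hz)
```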
Equation (9) was evaluated for the values of τ max j listed in Table II, by calculating the values of the numerator and denominator at 10,000 points equally spaced from τ j = 0 to τ max j . (Doubling the number of points considered does not modify the sensitivity ratio.)
A. Simplified experiments
Comparing parts (b) and (c) of Figure 2 highlights the additional presence of term 2 (Table I) in the J modulation of the REINE intensity in the absence of z filters. Figure 3 shows that this sine modulation can be accounted for accurately by modifying the fitting function according to the product operator analysis (i.e. by using Equation (11) instead of Equation (10)). Indeed, for all the linewidths considered here, the scalar couplings fit to the intensity modulations of experiment B are at least as accurate as those obtained from experiment A, the mean errors in the fitted values being low (< 0.2 Hz) for all but the broadest chemical shift distribution. (As discussed at greater length below, the fitted values diverge more substantially for the broadest line because of peak overlap.) With experiment C, in contrast, the discrepancies between the simulated and fitted scalar couplings are large (> 0.6 Hz on average) for the narrowest line, but decrease sharply as the disorder of the simulated system increases (mean error = 0.21 and 0.26 Hz for FWHM of 4.8 and 7.5 ppm, respectively).
This effect is emphasized in Figure S2, which compares the scalar coupling distributions obtained from simulations of experiments A and C, with no noise added, for different widths of the chemical shift distribution. 35 Figure S3 confirms that the deviations stem from the anti-phase terms (terms 3 and 4 in Table I), whose integrated intensity decreases markedly (by up to ∼70 %) as the chemical shift distribution broadens. 35 Note that although the specific values quoted here for the errors are parameter-dependent, the trends with respect to the linewidths and between the different versions of the REINE experiment are reproduced for a wide range of values, namely of the standard deviation of the Gaussian noise (Figure S4a,b), the MAS frequency and magnitude of the dipole-dipole couplings (Figure S5a-c), and the range of the J coupling distribution (Figure S5d). 35 In particular, although the zero crossing in the intensity modulation occurs later for experiments B and C, these in fact yield more (not less) accurate values than experiment A when T ′ 2 is shorter (Figure S4c). Furthermore, the fitting variables remain uncorrelated throughout (Table SIII). In summary, provided the chemical shift distribution is very broad with respect to the mean J coupling (FWHM > 30 × J), the REINE experiment can be performed without z filters without compromising the accuracy of the results. In line with the predicted gains in sensitivity (+11-102 %, see Table II), the removal of the first z filter makes experiments B and C 1.23 times more sensitive than experiment A, as measured from the total integrated intensity for the set of simulations whose results are summarized in Figure 3. Experiment C can moreover be run with whole-echo acquisition, leading to an extra √2 gain in sensitivity.

FIG. 3. Mean error, as a function of the full width at half maximum (FWHM) of the underlying chemical shift distribution, in the J coupling strengths fit to the intensity variations of refocused INADEQUATE spin-echo spectra. Gaussian noise with a standard deviation of 0.05 was added to the separate intensity modulations (normalized to a maximum of 1.0) obtained for each point in the frequency domain. The error bars, representing the standard deviation of five repeat fits, are smaller than the point markers. The physical systems corresponding to each FWHM are described in Table SI in the supplementary material. 35
B. Sources of error
The mean errors in the fitted J coupling strengths range from ∼0.2 to ∼0.7 Hz (Figure 3), but the fitting errors range only from 0.2 to 0.3 Hz (see Figures 2(b) and 2(c) for example). Indeed, for all versions of the REINE experiment, the modulation of a given point in the spectrum is an average of the intensity variations of several overlapping doublets. This is illustrated in Figure S2a (for which no noise was added to the simulated data), 35 where the fitted values are slightly overestimated to the left of the peak maximum and underestimated to the right, where the neighboring doublets toward the center of the distribution have respectively a greater and a smaller splitting than that of the underlying doublet, were it to be considered in isolation. Averaging induces more substantial errors when two (or more) chemical shift distributions overlap, as the difference between the mean splitting and those of the individual doublets is likely much greater. Nevertheless, the simulations performed here for overlapping lines suggest that even for large (> 15 Hz) differences between the overlapping doublets, ±1 Hz is a conservative estimate of the systematic error induced.
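A toy calculation illustrates the averaging effect described here; the J values and weights are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# One point in the spectrum sees a weighted sum of cosines with slightly
# different J; a single-J fit returns roughly the weighted mean.
tau_j = np.linspace(0.0, 0.036, 10)
Js, weights = np.array([17.0, 19.0, 21.0]), np.array([0.2, 0.5, 0.3])
signal = sum(w * np.cos(2 * np.pi * J * tau_j) for w, J in zip(weights, Js))

model = lambda t, A, J: A * np.cos(2 * np.pi * J * t)
(A_fit, J_fit), _ = curve_fit(model, tau_j, signal, p0=[1.0, 19.0])
print(J_fit)  # close to the weighted mean of Js, not to any single doublet
```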
Finally, should one wish to trade accuracy for sensitivity and use experiment C for relatively well-ordered systems, these results suggest that the distortions from the anti-phase contributions can be accounted for by simply adding 0.4 Hz to the errors output by the fitting algorithm.
V. CONCLUSION
The simplifications and the corresponding gains in sensitivity discussed in this paper should make the precise measurement of near-pair-specific scalar couplings more routine in a number of disordered systems. In practice, furthermore, omitting the two z-filter elements should make the REINE sequence easier to implement, with a higher tolerance for slightly imperfect pulse powers or durations. Comparing experimental values with those calculated from first principles 15,16 should allow both a refinement of the algorithms and, eventually, a better understanding of the bonding and structural variations in non-crystalline solids, possibly in combination with the precise measures of coherence lifetimes that this approach provides.
FIG. 2. (a) Simulated refocused INADEQUATE spin-echo (REINE) spectra with no z filters and the values of τ j (see Figure 1) listed in the inset. (b, c) Fits (solid lines) of the evolution as a function of τ j of the intensity at −8.2 ppm in REINE spectra simulated (b) with two z filters and (c) without z filters. From top to bottom, the values listed on the right of panels (b) and (c) are the simulated and fitted J coupling, and the correlation coefficients between the fitting parameters. Gaussian noise with a standard deviation of 0.05 was added to the separate intensity modulations (vs. τ j , normalized to a maximum of 1.0) obtained for each point in the frequency domain. (d) Simulated (thick black line) and fitted (red dots) J coupling strengths, above the corresponding REINE spectrum (two z filters) at τ j = 0.
FIG. 1. Pulse sequence and coherence transfer pathway diagram for the refocused INADEQUATE spin-echo (REINE) sequence. Three versions of the experiment are considered here (viz. with both z filters, with only the second, or with no z filters), denoted A, B and C, respectively.
TABLE I. List of terms contributing to refocused INADEQUATE spin-echo spectra.
TABLE II. Sensitivity of the refocused INADEQUATE spin-echo experiment without versus with the second z filter.
| 4,271.4 | 2016-05-04T00:00:00.000 | [ "Materials Science", "Physics" ] |